Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
277,858 | 24,107,141,307 | IssuesEvent | 2022-09-20 08:24:59 | MTES-MCT/histologe | https://api.github.com/repos/MTES-MCT/histologe | closed | [BO - Pagination] Return to page 1 | bug A tester | On the BO home page, when using pagination to go from page 1 to page 2, 3, etc., it is not possible to return to page 1. Clicking on page 1 leaves you on the current page.
On all platforms. Test: https://habitatindigne13.histologe.fr/bo/ or https://habitat-indigne06.histologe.fr/bo/
| 1.0 | [BO - Pagination] Return to page 1 - On the BO home page, when using pagination to go from page 1 to page 2, 3, etc., it is not possible to return to page 1. Clicking on page 1 leaves you on the current page.
On all platforms. Test: https://habitatindigne13.histologe.fr/bo/ or https://habitat-indigne06.histologe.fr/bo/
| test | return to page on the bo home page when using pagination to go from page to etc it is not possible to return to page you stay on the current page when clicking on page on all platforms test or | 1 |
247,112 | 20,957,621,002 | IssuesEvent | 2022-03-27 10:16:52 | Fabulously-Optimized/fabulously-optimized | https://api.github.com/repos/Fabulously-Optimized/fabulously-optimized | closed | Bobby | mod feedback/testers wanted | **Mod name**
Bobby
**Curseforge link**
https://www.curseforge.com/minecraft/mc-mods/bobby
**Modrinth link**
https://modrinth.com/mod/bobby
**Other link**
https://github.com/Johni0702/bobby
**What it does**
Bobby is a Minecraft mod which allows for render distances greater than the server's view-distance setting. It accomplishes this goal by recording and storing (in .minecraft/.bobby) all chunks sent by the server which it then can load and display at a later point when the chunk is outside the server's view-distance.
**Why should it be in the modpack**
Because players can see further than what the server's view-distance is set to, if they want to.
**Why shouldn't it be in the modpack**
Starlight currently has an issue with it, but Phosphor works fine: https://github.com/Spottedleaf/Starlight/issues/38
**Categories**
<!--- Select any that match: -->
- [ ] Performance optimization
- [x] Graphics optimization
- [ ] New feature
- [ ] Optifine parity
- [ ] Fixes a bug/dependency
- [ ] Replaces an existing mod
| 1.0 | Bobby - **Mod name**
Bobby
**Curseforge link**
https://www.curseforge.com/minecraft/mc-mods/bobby
**Modrinth link**
https://modrinth.com/mod/bobby
**Other link**
https://github.com/Johni0702/bobby
**What it does**
Bobby is a Minecraft mod which allows for render distances greater than the server's view-distance setting. It accomplishes this goal by recording and storing (in .minecraft/.bobby) all chunks sent by the server which it then can load and display at a later point when the chunk is outside the server's view-distance.
**Why should it be in the modpack**
Because players can see further than what the server's view-distance is set to, if they want to.
**Why shouldn't it be in the modpack**
Starlight currently has an issue with it, but Phosphor works fine: https://github.com/Spottedleaf/Starlight/issues/38
**Categories**
<!--- Select any that match: -->
- [ ] Performance optimization
- [x] Graphics optimization
- [ ] New feature
- [ ] Optifine parity
- [ ] Fixes a bug/dependency
- [ ] Replaces an existing mod
| test | bobby mod name bobby curseforge link modrinth link other link what it does bobby is a minecraft mod which allows for render distances greater than the server s view distance setting it accomplishes this goal by recording and storing in minecraft bobby all chunks sent by the server which it then can load and display at a later point when the chunk is outside the server s view distance why should it be in the modpack because players can see further than what the server s view distance is set to if they want to why shouldn t it be in the modpack starlight currently has an issue with it but phosphor works fine categories performance optimization graphics optimization new feature optifine parity fixes a bug dependency replaces an existing mod | 1 |
193,432 | 14,652,988,714 | IssuesEvent | 2020-12-28 04:11:55 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | open-telemetry/opentelemetry-collector-contrib: exporter/splunkhecexporter/metricdata_to_splunk_test.go; 3 LoC | fresh test tiny |
Found a possible issue in [open-telemetry/opentelemetry-collector-contrib](https://www.github.com/open-telemetry/opentelemetry-collector-contrib) at [exporter/splunkhecexporter/metricdata_to_splunk_test.go](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/c228239efd3eee55bf395cfddffb69ecc61bf8b2/exporter/splunkhecexporter/metricdata_to_splunk_test.go#L597-L599)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
>
[Click here to see the code in its original context.](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/c228239efd3eee55bf395cfddffb69ecc61bf8b2/exporter/splunkhecexporter/metricdata_to_splunk_test.go#L597-L599)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for i, want := range tt.wantSplunkMetrics {
assert.Equal(t, &want, gotMetrics[i])
}
```
</details>
<details>
<summary>Click here to show extra information the analyzer produced.</summary>
```
No path was found through the callgraph that could lead to a function which writes a pointer argument.
No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.
root signature {Equal 3} was not found in the callgraph; reference was passed directly to third-party code
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: c228239efd3eee55bf395cfddffb69ecc61bf8b2
| 1.0 | open-telemetry/opentelemetry-collector-contrib: exporter/splunkhecexporter/metricdata_to_splunk_test.go; 3 LoC -
Found a possible issue in [open-telemetry/opentelemetry-collector-contrib](https://www.github.com/open-telemetry/opentelemetry-collector-contrib) at [exporter/splunkhecexporter/metricdata_to_splunk_test.go](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/c228239efd3eee55bf395cfddffb69ecc61bf8b2/exporter/splunkhecexporter/metricdata_to_splunk_test.go#L597-L599)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
>
[Click here to see the code in its original context.](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/c228239efd3eee55bf395cfddffb69ecc61bf8b2/exporter/splunkhecexporter/metricdata_to_splunk_test.go#L597-L599)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for i, want := range tt.wantSplunkMetrics {
assert.Equal(t, &want, gotMetrics[i])
}
```
</details>
<details>
<summary>Click here to show extra information the analyzer produced.</summary>
```
No path was found through the callgraph that could lead to a function which writes a pointer argument.
No path was found through the callgraph that could lead to a function which passes a pointer to third-party code.
root signature {Equal 3} was not found in the callgraph; reference was passed directly to third-party code
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: c228239efd3eee55bf395cfddffb69ecc61bf8b2
| test | open telemetry opentelemetry collector contrib exporter splunkhecexporter metricdata to splunk test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message click here to show the line s of go which triggered the analyzer go for i want range tt wantsplunkmetrics assert equal t want gotmetrics click here to show extra information the analyzer produced no path was found through the callgraph that could lead to a function which writes a pointer argument no path was found through the callgraph that could lead to a function which passes a pointer to third party code root signature equal was not found in the callgraph reference was passed directly to third party code leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 1 |
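To illustrate the hazard this class of analyzer finding looks for, here is a minimal sketch (not taken from the flagged repository; the function and slice names are invented): before Go 1.22, a `for ... range` loop reused a single loop variable, so taking its address on each iteration yielded pointers that all alias one location. Copying the variable inside the loop body gives each iteration its own address. In the flagged snippet, `&want` is consumed within the same iteration by `assert.Equal`, which is one reason such findings are often classified as mitigated rather than bugs.

```go
package main

import "fmt"

// collectPtrs returns one pointer per element. Shadowing v with a
// per-iteration copy ensures each pointer refers to a distinct value;
// before Go 1.22, taking &v without the copy would make every pointer
// alias the same loop variable.
func collectPtrs(vals []int) []*int {
	ptrs := make([]*int, 0, len(vals))
	for _, v := range vals {
		v := v // per-iteration copy
		ptrs = append(ptrs, &v)
	}
	return ptrs
}

func main() {
	for _, p := range collectPtrs([]int{10, 20, 30}) {
		fmt.Println(*p) // prints 10, 20, 30 rather than the last value three times
	}
}
```

Since Go 1.22, the loop variable is freshly declared per iteration, so the shadowing line is no longer needed; it remains the safe, version-independent idiom.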
151,342 | 19,648,810,994 | IssuesEvent | 2022-01-10 02:36:14 | turkdevops/angular | https://api.github.com/repos/turkdevops/angular | closed | WS-2019-0318 (High) detected in handlebars-4.4.3.tgz, handlebars-4.4.2.tgz - autoclosed | security vulnerability | ## WS-2019-0318 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>handlebars-4.4.3.tgz</b>, <b>handlebars-4.4.2.tgz</b></p></summary>
<p>
<details><summary><b>handlebars-4.4.3.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.3.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.3.tgz</a></p>
<p>Path to dependency file: angular/integration/cli-hello-world-lazy/package.json</p>
<p>Path to vulnerable library: angular/integration/cli-hello-world-lazy/node_modules/handlebars/package.json,angular/integration/cli-hello-world-lazy-rollup/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-istanbul-reporter-2.1.0.tgz (Root Library)
- istanbul-api-2.1.6.tgz
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.4.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>handlebars-4.4.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz</a></p>
<p>Path to dependency file: angular/integration/cli-hello-world-ivy-i18n/package.json</p>
<p>Path to vulnerable library: angular/integration/cli-hello-world-ivy-i18n/node_modules/handlebars/package.json,angular/integration/ivy-i18n/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-istanbul-reporter-2.1.0.tgz (Root Library)
- istanbul-api-2.1.6.tgz
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/angular/commit/c6aca37f442da8c55a02d7c53ccc58100ab004f3">c6aca37f442da8c55a02d7c53ccc58100ab004f3</a></p>
<p>Found in base branch: <b>labs/router</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Handlebars versions prior to 4.4.5 are vulnerable to Regular expression Denial of Service (ReDoS) when receiving specially-crafted templates.
<p>Publish Date: 2019-10-20
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0318</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-10-20</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0318 (High) detected in handlebars-4.4.3.tgz, handlebars-4.4.2.tgz - autoclosed - ## WS-2019-0318 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>handlebars-4.4.3.tgz</b>, <b>handlebars-4.4.2.tgz</b></p></summary>
<p>
<details><summary><b>handlebars-4.4.3.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.3.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.3.tgz</a></p>
<p>Path to dependency file: angular/integration/cli-hello-world-lazy/package.json</p>
<p>Path to vulnerable library: angular/integration/cli-hello-world-lazy/node_modules/handlebars/package.json,angular/integration/cli-hello-world-lazy-rollup/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-istanbul-reporter-2.1.0.tgz (Root Library)
- istanbul-api-2.1.6.tgz
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.4.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>handlebars-4.4.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz</a></p>
<p>Path to dependency file: angular/integration/cli-hello-world-ivy-i18n/package.json</p>
<p>Path to vulnerable library: angular/integration/cli-hello-world-ivy-i18n/node_modules/handlebars/package.json,angular/integration/ivy-i18n/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-istanbul-reporter-2.1.0.tgz (Root Library)
- istanbul-api-2.1.6.tgz
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/angular/commit/c6aca37f442da8c55a02d7c53ccc58100ab004f3">c6aca37f442da8c55a02d7c53ccc58100ab004f3</a></p>
<p>Found in base branch: <b>labs/router</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Handlebars versions prior to 4.4.5 are vulnerable to Regular expression Denial of Service (ReDoS) when receiving specially-crafted templates.
<p>Publish Date: 2019-10-20
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0318</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-10-20</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | ws high detected in handlebars tgz handlebars tgz autoclosed ws high severity vulnerability vulnerable libraries handlebars tgz handlebars tgz handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file angular integration cli hello world lazy package json path to vulnerable library angular integration cli hello world lazy node modules handlebars package json angular integration cli hello world lazy rollup node modules handlebars package json dependency hierarchy karma coverage istanbul reporter tgz root library istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file angular integration cli hello world ivy package json path to vulnerable library angular integration cli hello world ivy node modules handlebars package json angular integration ivy node modules handlebars package json dependency hierarchy karma coverage istanbul reporter tgz root library istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library found in head commit a href found in base branch labs router vulnerability details in showdownjs showdown versions prior to are vulnerable against regular expression denial of service redos once receiving specially crafted templates publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date 
fix resolution handlebars step up your open source security game with whitesource | 0 |
285,125 | 8,755,082,488 | IssuesEvent | 2018-12-14 13:50:37 | leeensminger/DelDOT-NPDES-Viewer | https://api.github.com/repos/leeensminger/DelDOT-NPDES-Viewer | closed | BMP Report - First word of every action does not display | priority item reports | The first word of every action item comment generated under the Action Item Summary portion of the report does not display. Example: BMP 25 - 2017 Inspection (the first word, "REMOVE", was cut off).
Database:

Web Viewer Generated Report:

| 1.0 | BMP Report - First word of every action does not display - The first word of every action item comment generated under the Action Item Summary portion of the report does not display. Example: BMP 25 - 2017 Inspection (the first word, "REMOVE", was cut off).
Database:

Web Viewer Generated Report:

| non_test | bmp report first word of every action does not display the first word of every action item comment generated under the action item summary portion of the report does not display example is bmp inspection the first word remove was cut off database web viewer generated report | 0 |
104,472 | 16,616,834,294 | IssuesEvent | 2021-06-02 17:49:31 | Dima2021/t-vault | https://api.github.com/repos/Dima2021/t-vault | opened | WS-2017-0268 (Medium) detected in angular-1.4.14.tgz | security vulnerability | ## WS-2017-0268 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.4.14.tgz</b></p></summary>
<p>HTML enhanced for web apps</p>
<p>Library home page: <a href="https://registry.npmjs.org/angular/-/angular-1.4.14.tgz">https://registry.npmjs.org/angular/-/angular-1.4.14.tgz</a></p>
<p>Path to dependency file: t-vault/tvaultui/package.json</p>
<p>Path to vulnerable library: t-vault/tvaultui/node_modules/angular/package.json</p>
<p>
Dependency Hierarchy:
- angular-counter-0.2.1.tgz (Root Library)
- :x: **angular-1.4.14.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/t-vault/commit/259885b704776a5554c5d008b51b19c9b0ea9fd5">259885b704776a5554c5d008b51b19c9b0ea9fd5</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Both Firefox and Safari are vulnerable to XSS if we use an inert document created via `document.implementation.createHTMLDocument()`.
<p>Publish Date: 2017-05-25
<p>URL: <a href=https://github.com/angular/angular.js/commit/8f31f1ff43b673a24f84422d5c13d6312b2c4d94>WS-2017-0268</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/angular/angular.js/commit/8f31f1ff43b673a24f84422d5c13d6312b2c4d94">https://github.com/angular/angular.js/commit/8f31f1ff43b673a24f84422d5c13d6312b2c4d94</a></p>
<p>Release Date: 2017-06-05</p>
<p>Fix Resolution: Replace or update the following files: sanitize.js, sanitizeSpec.js</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"angular","packageVersion":"1.4.14","packageFilePaths":["/tvaultui/package.json"],"isTransitiveDependency":true,"dependencyTree":"angular-counter:0.2.1;angular:1.4.14","isMinimumFixVersionAvailable":false}],"baseBranches":["dev"],"vulnerabilityIdentifier":"WS-2017-0268","vulnerabilityDetails":"Both Firefox and Safari are vulnerable to XSS if we use an inert document created via `document.implementation.createHTMLDocument()`.","vulnerabilityUrl":"https://github.com/angular/angular.js/commit/8f31f1ff43b673a24f84422d5c13d6312b2c4d94","cvss3Severity":"medium","cvss3Score":"4.7","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | WS-2017-0268 (Medium) detected in angular-1.4.14.tgz - ## WS-2017-0268 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.4.14.tgz</b></p></summary>
<p>HTML enhanced for web apps</p>
<p>Library home page: <a href="https://registry.npmjs.org/angular/-/angular-1.4.14.tgz">https://registry.npmjs.org/angular/-/angular-1.4.14.tgz</a></p>
<p>Path to dependency file: t-vault/tvaultui/package.json</p>
<p>Path to vulnerable library: t-vault/tvaultui/node_modules/angular/package.json</p>
<p>
Dependency Hierarchy:
- angular-counter-0.2.1.tgz (Root Library)
- :x: **angular-1.4.14.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/t-vault/commit/259885b704776a5554c5d008b51b19c9b0ea9fd5">259885b704776a5554c5d008b51b19c9b0ea9fd5</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Both Firefox and Safari are vulnerable to XSS if we use an inert document created via `document.implementation.createHTMLDocument()`.
<p>Publish Date: 2017-05-25
<p>URL: <a href=https://github.com/angular/angular.js/commit/8f31f1ff43b673a24f84422d5c13d6312b2c4d94>WS-2017-0268</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/angular/angular.js/commit/8f31f1ff43b673a24f84422d5c13d6312b2c4d94">https://github.com/angular/angular.js/commit/8f31f1ff43b673a24f84422d5c13d6312b2c4d94</a></p>
<p>Release Date: 2017-06-05</p>
<p>Fix Resolution: Replace or update the following files: sanitize.js, sanitizeSpec.js</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"angular","packageVersion":"1.4.14","packageFilePaths":["/tvaultui/package.json"],"isTransitiveDependency":true,"dependencyTree":"angular-counter:0.2.1;angular:1.4.14","isMinimumFixVersionAvailable":false}],"baseBranches":["dev"],"vulnerabilityIdentifier":"WS-2017-0268","vulnerabilityDetails":"Both Firefox and Safari are vulnerable to XSS if we use an inert document created via `document.implementation.createHTMLDocument()`.","vulnerabilityUrl":"https://github.com/angular/angular.js/commit/8f31f1ff43b673a24f84422d5c13d6312b2c4d94","cvss3Severity":"medium","cvss3Score":"4.7","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_test | ws medium detected in angular tgz ws medium severity vulnerability vulnerable library angular tgz html enhanced for web apps library home page a href path to dependency file t vault tvaultui package json path to vulnerable library t vault tvaultui node modules angular package json dependency hierarchy angular counter tgz root library x angular tgz vulnerable library found in head commit a href found in base branch dev vulnerability details both firefox and safari are vulnerable to xss if we use an inert document created via document implementation createhtmldocument publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type change files origin a href release date fix resolution replace or update the following files sanitize js sanitizespec js isopenpronvulnerability true ispackagebased true 
isdefaultbranch true packages istransitivedependency true dependencytree angular counter angular isminimumfixversionavailable false basebranches vulnerabilityidentifier ws vulnerabilitydetails both firefox and safari are vulnerable to xss if we use an inert document created via document implementation createhtmldocument vulnerabilityurl | 0 |
213,215 | 16,506,173,031 | IssuesEvent | 2021-05-25 19:37:10 | The-GNTL-Project/Exchange | https://api.github.com/repos/The-GNTL-Project/Exchange | closed | Add a conversion ticker to bottom of trading window for XMR-BTC-USD-GBP | testing | Trading in XMR is weird. Let's put something up to bring the numbers back to something more familiar. | 1.0 | Add a conversion ticker to bottom of trading window for XMR-BTC-USD-GBP - Trading in XMR is weird. Let's put something up to bring the numbers back to something more familiar. | test | add a conversion ticker to bottom of trading window for xmr btc usd gbp trading in xmr is weird let s put something up to bring the numbers back to something more familiar | 1 |
28,952 | 4,152,655,890 | IssuesEvent | 2016-06-16 02:30:08 | DSDL2016/FwdRevDesign | https://api.github.com/repos/DSDL2016/FwdRevDesign | closed | Circuit spec | design | ```
let schematic = [
{
type: 'input',
out: [[{id: 2, port: 0}]]
},
{
type: 'input',
out: [[{id: 2, port: 1}]]
},
{
type: 'rs',
out: [[{id: 3, port: 0}], [{id: 4, port: 0}]]
},
{
type: 'output'
},
{
type: 'output'
}
];
```
#1 | 1.0 | Circuit spec - ```
let schematic = [
{
type: 'input',
out: [[{id: 2, port: 0}]]
},
{
type: 'input',
out: [[{id: 2, port: 1}]]
},
{
type: 'rs',
out: [[{id: 3, port: 0}], [{id: 4, port: 0}]]
},
{
type: 'output'
},
{
type: 'output'
}
];
```
#1 | non_test | circuit spec let schematic type input out type input out type rs out type output type output | 0 |
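As a companion sketch of the circuit spec above (in Go rather than the spec's JavaScript; the `Node`/`Port` type names are invented for illustration), the schematic can be read as an adjacency structure: each node lists, per output pin, the destination node index and input port it drives. Flattening it recovers the wiring of the `rs` example:

```go
package main

import "fmt"

// Port identifies an input port on a destination node.
type Port struct {
	ID  int // index of the destination node in the schematic slice
	Pin int // input port number on that node
}

// Node mirrors one entry of the schematic: a gate type plus, for each
// output pin, the list of input ports that pin drives.
type Node struct {
	Type string
	Out  [][]Port
}

// wires flattens the schematic into "src.pin -> dst.pin" strings.
func wires(schematic []Node) []string {
	var ws []string
	for src, n := range schematic {
		for pin, dests := range n.Out {
			for _, d := range dests {
				ws = append(ws, fmt.Sprintf("%d.%d -> %d.%d", src, pin, d.ID, d.Pin))
			}
		}
	}
	return ws
}

func main() {
	schematic := []Node{
		{Type: "input", Out: [][]Port{{{ID: 2, Pin: 0}}}},
		{Type: "input", Out: [][]Port{{{ID: 2, Pin: 1}}}},
		{Type: "rs", Out: [][]Port{{{ID: 3, Pin: 0}}, {{ID: 4, Pin: 0}}}},
		{Type: "output"},
		{Type: "output"},
	}
	for _, w := range wires(schematic) {
		fmt.Println(w)
	}
}
```

Running this prints the four wires `0.0 -> 2.0`, `1.0 -> 2.1`, `2.0 -> 3.0`, and `2.1 -> 4.0`: both inputs feed the `rs` node, whose two output pins drive the two outputs.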
788,789 | 27,766,702,785 | IssuesEvent | 2023-03-16 11:57:44 | AY2223S2-CS2103T-T17-1/tp | https://api.github.com/repos/AY2223S2-CS2103T-T17-1/tp | closed | Update Undo Function | type.Story priority.Medium | As a careless student(user), I can undo my last command that was accidental so that my mistakes can easily fixed. | 1.0 | Update Undo Function - As a careless student(user), I can undo my last command that was accidental so that my mistakes can easily fixed. | non_test | update undo function as a careless student user i can undo my last command that was accidental so that my mistakes can easily fixed | 0 |
219,688 | 24,513,414,451 | IssuesEvent | 2022-10-11 01:08:16 | jaredhendrickson13/pfsense-api | https://api.github.com/repos/jaredhendrickson13/pfsense-api | closed | How to generate an API token by command line or API? | feature request security | Hi,
If I want to generate an API token, is there any way to generate one from the command line or the API?
I am asking because I want to write a script that automates the setup of pfSense from scratch using the API token, but I am stuck: I cannot generate a token to use.
Also, sorry for my bad English~. | True | How to generate an API token by command line or API? - Hi,
If I want to generate an API token, is there any way to generate one from the command line or the API?
I am asking because I want to write a script that automates the setup of pfSense from scratch using the API token, but I am stuck: I cannot generate a token to use.
Also, sorry for my bad English~. | non_test | how to generate a api token by command line or api hi if i want to generate a api token is there any way to generate by command line or api because i want to write a script that automates the settings of pfsense from scratch and can use the api token to operate pfsense but i am stop at unable to generate a token to use also sorry for my bad english | 0 |
282,619 | 8,708,732,399 | IssuesEvent | 2018-12-06 11:49:42 | akeeba/angie | https://api.github.com/repos/akeeba/angie | closed | Console errors about missing fonts | Priority 3 bug | ANGIE tries to load missing FEF fonts: OpenSans-Bold.ttf Montserrat-Regular.ttf OpenSans-Italic.ttf
Should we include in the installer or remove them from the CSS?
Is there an option to link fef.min.css to avoid using those fonts?
Paging @nikosdion | 1.0 | Console errors about missing fonts - ANGIE tries to load missing FEF fonts: OpenSans-Bold.ttf Montserrat-Regular.ttf OpenSans-Italic.ttf
Should we include in the installer or remove them from the CSS?
Is there an option to link fef.min.css to avoid using those fonts?
Paging @nikosdion | non_test | console errors about missing fonts angie tries to load missing fef fonts opensans bold ttf montserrat regular ttf opensans italic ttf should we include in the installer or remove them from the css is there an option to link fef min css to avoid using those fonts paging nikosdion | 0 |
257,124 | 22,147,910,642 | IssuesEvent | 2022-06-03 13:54:25 | osl-incubator/cookiecutter-python | https://api.github.com/repos/osl-incubator/cookiecutter-python | opened | Improve tests | test | Improve tests on CI:
- [ ] test all options when creating a NEW project from template
- [ ] test semantic release for the NEW project from template
- [ ] test semantic release for cookiecutter-python | 1.0 | Improve tests - Improve tests on CI:
- [ ] test all options when creating a NEW project from template
- [ ] test semantic release for the NEW project from template
- [ ] test semantic release for cookiecutter-python | test | improve tests improve tests on ci test all options when creating a new project from template test semantic release for the new project from template test semantic release for cookiecutter python | 1 |
86,316 | 15,755,519,957 | IssuesEvent | 2021-03-31 01:55:39 | biswajit-paul/dpone | https://api.github.com/repos/biswajit-paul/dpone | opened | CVE-2015-8857 (High) detected in uglify-js-2.2.5.tgz | security vulnerability | ## CVE-2015-8857 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-2.2.5.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz</a></p>
<p>Path to dependency file: /dpone/core/assets/vendor/jquery.ui/package.json</p>
<p>Path to vulnerable library: dpone/core/assets/vendor/jquery.ui/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-uglify-0.1.1.tgz (Root Library)
- :x: **uglify-js-2.2.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8857>CVE-2015-8857</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p>
<p>Release Date: 2018-12-15</p>
<p>Fix Resolution: v2.4.24</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2015-8857 (High) detected in uglify-js-2.2.5.tgz - ## CVE-2015-8857 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-2.2.5.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.2.5.tgz</a></p>
<p>Path to dependency file: /dpone/core/assets/vendor/jquery.ui/package.json</p>
<p>Path to vulnerable library: dpone/core/assets/vendor/jquery.ui/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-uglify-0.1.1.tgz (Root Library)
- :x: **uglify-js-2.2.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8857>CVE-2015-8857</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p>
<p>Release Date: 2018-12-15</p>
<p>Fix Resolution: v2.4.24</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in uglify js tgz cve high severity vulnerability vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file dpone core assets vendor jquery ui package json path to vulnerable library dpone core assets vendor jquery ui node modules uglify js package json dependency hierarchy grunt contrib uglify tgz root library x uglify js tgz vulnerable library vulnerability details the uglify js package before for node js does not properly account for non boolean values when rewriting boolean expressions which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten javascript publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
53,243 | 6,305,914,220 | IssuesEvent | 2017-07-21 19:35:24 | apache/couchdb | https://api.github.com/repos/apache/couchdb | closed | Test failure in compact.js | bug testsuite | Seems like a new one and a bit of an oddity.
Travis log:
```
==> couchdb (eunit)
make[1]: Leaving directory `/home/travis/build/apache/couchdb'
make[1]: Entering directory `/home/travis/build/apache/couchdb'
test/javascript/tests/all_docs.js pass
test/javascript/tests/attachment_names.js pass
test/javascript/tests/attachment_paths.js pass
test/javascript/tests/attachment_ranges.js pass
test/javascript/tests/attachment_views.js pass
test/javascript/tests/attachments.js pass
test/javascript/tests/attachments_multipart.js pass
test/javascript/tests/auth_cache.js pass
test/javascript/tests/basics.js pass
test/javascript/tests/batch_save.js pass
test/javascript/tests/bulk_docs.js pass
test/javascript/tests/changes.js pass
test/javascript/tests/coffee.js pass
test/javascript/tests/compact.js
Error: Failed to execute HTTP request: couldn't connect to host
Trace back (most recent call first):
37: test/javascript/couch_http.js
("")
468: 127.0.0.1/_system")@test/javascript/couch.js
("GET","/_node/node1
75: test/javascript/test_setup.js
getUptime()
95: test/javascript/test_setup.js
restartServer()
57: test/javascript/tests/compact.js
()
37: test/javascript/cli_runner.js
runTest()
48: test/javascript/cli_runner.js
fail
=======================================================
JavaScript tests complete.
Failed: 1. Skipped or passed: 13.
make[1]: *** [javascript] Error 1
make[1]: Leaving directory `/home/travis/build/apache/couchdb'
make: *** [check] Error 2
```
And the dev/logs/node1.log from the uploaded report
```
[info] 2017-07-12T22:04:08.243221Z node1@127.0.0.1 <0.7.0> -------- Application couch_log started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.244195Z node1@127.0.0.1 <0.7.0> -------- Application folsom started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.273907Z node1@127.0.0.1 <0.7.0> -------- Application couch_stats started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.274377Z node1@127.0.0.1 <0.7.0> -------- Application crypto started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.276054Z node1@127.0.0.1 <0.7.0> -------- Application sasl started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.277534Z node1@127.0.0.1 <0.7.0> -------- Application inets started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.277786Z node1@127.0.0.1 <0.7.0> -------- Application asn1 started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.278060Z node1@127.0.0.1 <0.7.0> -------- Application public_key started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.278683Z node1@127.0.0.1 <0.7.0> -------- Application ssl started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.305258Z node1@127.0.0.1 <0.7.0> -------- Application os_mon started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.321391Z node1@127.0.0.1 <0.7.0> -------- Application ibrowse started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.321690Z node1@127.0.0.1 <0.7.0> -------- Application xmerl started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.321930Z node1@127.0.0.1 <0.7.0> -------- Application compiler started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.322159Z node1@127.0.0.1 <0.7.0> -------- Application syntax_tools started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.322391Z node1@127.0.0.1 <0.7.0> -------- Application mochiweb started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.322632Z node1@127.0.0.1 <0.7.0> -------- Application b64url started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.322854Z node1@127.0.0.1 <0.7.0> -------- Application khash started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.323672Z node1@127.0.0.1 <0.7.0> -------- Application couch_event started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.324164Z node1@127.0.0.1 <0.7.0> -------- Application ioq started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.324407Z node1@127.0.0.1 <0.216.0> -------- Apache CouchDB 2.1.0-e05ae1c is starting.
[info] 2017-07-12T22:04:08.324594Z node1@127.0.0.1 <0.217.0> -------- Starting couch_sup
[info] 2017-07-12T22:04:10.441495Z node1@127.0.0.1 <0.216.0> -------- Apache CouchDB has started. Time to relax.
[info] 2017-07-12T22:04:10.441582Z node1@127.0.0.1 <0.216.0> -------- Apache CouchDB has started on http://127.0.0.1:15986/
[info] 2017-07-12T22:04:10.441734Z node1@127.0.0.1 <0.7.0> -------- Application couch started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.441863Z node1@127.0.0.1 <0.7.0> -------- Application ets_lru started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.443154Z node1@127.0.0.1 <0.7.0> -------- Application rexi started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.452089Z node1@127.0.0.1 <0.327.0> -------- Opening index for db: _users idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-07-12T22:04:10.455477Z node1@127.0.0.1 <0.320.0> -------- Starting compaction for db "_dbs"
[info] 2017-07-12T22:04:10.460567Z node1@127.0.0.1 <0.7.0> -------- Application mem3 started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.460618Z node1@127.0.0.1 <0.7.0> -------- Application fabric started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.467160Z node1@127.0.0.1 <0.7.0> -------- Application chttpd started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.468730Z node1@127.0.0.1 <0.7.0> -------- Application setup started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.468864Z node1@127.0.0.1 <0.7.0> -------- Application couch_peruser started on node 'node1@127.0.0.1'
[notice] 2017-07-12T22:04:10.470615Z node1@127.0.0.1 <0.74.0> -------- config: [features] scheduler set to true for reason nil
[info] 2017-07-12T22:04:10.479206Z node1@127.0.0.1 <0.7.0> -------- Application couch_replicator started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.479397Z node1@127.0.0.1 <0.7.0> -------- Application bear started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.482266Z node1@127.0.0.1 <0.7.0> -------- Application global_changes started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.482538Z node1@127.0.0.1 <0.7.0> -------- Application couch_plugins started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.485953Z node1@127.0.0.1 <0.7.0> -------- Application runtime_tools started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.486659Z node1@127.0.0.1 <0.7.0> -------- Application ddoc_cache started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.487439Z node1@127.0.0.1 <0.7.0> -------- Application couch_index started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.487690Z node1@127.0.0.1 <0.7.0> -------- Application couch_mrview started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.487927Z node1@127.0.0.1 <0.7.0> -------- Application snappy started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.488168Z node1@127.0.0.1 <0.7.0> -------- Application jiffy started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.490912Z node1@127.0.0.1 <0.7.0> -------- Application mango started on node 'node1@127.0.0.1'
[notice] 2017-07-12T22:04:10.557583Z node1@127.0.0.1 <0.320.0> -------- Compaction swap for db: /home/travis/build/apache/couchdb/dev/lib/node1/data/_dbs.couch 209086 20670
[info] 2017-07-12T22:04:10.558894Z node1@127.0.0.1 <0.320.0> -------- Compaction for db "_dbs" completed.
[info] 2017-07-12T22:04:10.573322Z node1@127.0.0.1 <0.474.0> -------- Opening index for db: _replicator idx: _design/_replicator sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-07-12T22:04:10.623009Z node1@127.0.0.1 <0.484.0> -------- Starting compaction for db "shards/60000000-7fffffff/_global_changes.1499896983"
[notice] 2017-07-12T22:04:10.697090Z node1@127.0.0.1 <0.484.0> -------- Compaction swap for db: /home/travis/build/apache/couchdb/dev/lib/node1/data/shards/60000000-7fffffff/_global_changes.1499896983.couch 135358 12478
[info] 2017-07-12T22:04:10.705956Z node1@127.0.0.1 <0.484.0> -------- Compaction for db "shards/60000000-7fffffff/_global_changes.1499896983" completed.
[info] 2017-07-12T22:04:10.830374Z node1@127.0.0.1 <0.561.0> -------- Opening index for db: shards/40000000-5fffffff/_users.1499896983 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-07-12T22:04:10.853621Z node1@127.0.0.1 <0.580.0> -------- Opening index for db: shards/80000000-9fffffff/_replicator.1499896983 idx: _design/_replicator sig: "3e823c2a4383ac0c18d4e574135a5b08"
[notice] 2017-07-12T22:04:11.018617Z node1@127.0.0.1 <0.343.0> ac9d241d53 127.0.0.1:15984 127.0.0.1 undefined GET / 200 ok 1
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
[os_mon] memory supervisor port (memsup): Erlang has closed
```
I don't understand how the previous tests aren't showing up in that log. | 1.0 | Test failure in compact.js - Seems like a new one and a bit of an oddity.
Travis log:
```
==> couchdb (eunit)
make[1]: Leaving directory `/home/travis/build/apache/couchdb'
make[1]: Entering directory `/home/travis/build/apache/couchdb'
test/javascript/tests/all_docs.js pass
test/javascript/tests/attachment_names.js pass
test/javascript/tests/attachment_paths.js pass
test/javascript/tests/attachment_ranges.js pass
test/javascript/tests/attachment_views.js pass
test/javascript/tests/attachments.js pass
test/javascript/tests/attachments_multipart.js pass
test/javascript/tests/auth_cache.js pass
test/javascript/tests/basics.js pass
test/javascript/tests/batch_save.js pass
test/javascript/tests/bulk_docs.js pass
test/javascript/tests/changes.js pass
test/javascript/tests/coffee.js pass
test/javascript/tests/compact.js
Error: Failed to execute HTTP request: couldn't connect to host
Trace back (most recent call first):
37: test/javascript/couch_http.js
("")
468: 127.0.0.1/_system")@test/javascript/couch.js
("GET","/_node/node1
75: test/javascript/test_setup.js
getUptime()
95: test/javascript/test_setup.js
restartServer()
57: test/javascript/tests/compact.js
()
37: test/javascript/cli_runner.js
runTest()
48: test/javascript/cli_runner.js
fail
=======================================================
JavaScript tests complete.
Failed: 1. Skipped or passed: 13.
make[1]: *** [javascript] Error 1
make[1]: Leaving directory `/home/travis/build/apache/couchdb'
make: *** [check] Error 2
```
And the dev/logs/node1.log from the uploaded report
```
[info] 2017-07-12T22:04:08.243221Z node1@127.0.0.1 <0.7.0> -------- Application couch_log started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.244195Z node1@127.0.0.1 <0.7.0> -------- Application folsom started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.273907Z node1@127.0.0.1 <0.7.0> -------- Application couch_stats started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.274377Z node1@127.0.0.1 <0.7.0> -------- Application crypto started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.276054Z node1@127.0.0.1 <0.7.0> -------- Application sasl started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.277534Z node1@127.0.0.1 <0.7.0> -------- Application inets started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.277786Z node1@127.0.0.1 <0.7.0> -------- Application asn1 started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.278060Z node1@127.0.0.1 <0.7.0> -------- Application public_key started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.278683Z node1@127.0.0.1 <0.7.0> -------- Application ssl started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.305258Z node1@127.0.0.1 <0.7.0> -------- Application os_mon started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.321391Z node1@127.0.0.1 <0.7.0> -------- Application ibrowse started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.321690Z node1@127.0.0.1 <0.7.0> -------- Application xmerl started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.321930Z node1@127.0.0.1 <0.7.0> -------- Application compiler started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.322159Z node1@127.0.0.1 <0.7.0> -------- Application syntax_tools started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.322391Z node1@127.0.0.1 <0.7.0> -------- Application mochiweb started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.322632Z node1@127.0.0.1 <0.7.0> -------- Application b64url started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.322854Z node1@127.0.0.1 <0.7.0> -------- Application khash started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.323672Z node1@127.0.0.1 <0.7.0> -------- Application couch_event started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.324164Z node1@127.0.0.1 <0.7.0> -------- Application ioq started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:08.324407Z node1@127.0.0.1 <0.216.0> -------- Apache CouchDB 2.1.0-e05ae1c is starting.
[info] 2017-07-12T22:04:08.324594Z node1@127.0.0.1 <0.217.0> -------- Starting couch_sup
[info] 2017-07-12T22:04:10.441495Z node1@127.0.0.1 <0.216.0> -------- Apache CouchDB has started. Time to relax.
[info] 2017-07-12T22:04:10.441582Z node1@127.0.0.1 <0.216.0> -------- Apache CouchDB has started on http://127.0.0.1:15986/
[info] 2017-07-12T22:04:10.441734Z node1@127.0.0.1 <0.7.0> -------- Application couch started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.441863Z node1@127.0.0.1 <0.7.0> -------- Application ets_lru started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.443154Z node1@127.0.0.1 <0.7.0> -------- Application rexi started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.452089Z node1@127.0.0.1 <0.327.0> -------- Opening index for db: _users idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-07-12T22:04:10.455477Z node1@127.0.0.1 <0.320.0> -------- Starting compaction for db "_dbs"
[info] 2017-07-12T22:04:10.460567Z node1@127.0.0.1 <0.7.0> -------- Application mem3 started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.460618Z node1@127.0.0.1 <0.7.0> -------- Application fabric started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.467160Z node1@127.0.0.1 <0.7.0> -------- Application chttpd started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.468730Z node1@127.0.0.1 <0.7.0> -------- Application setup started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.468864Z node1@127.0.0.1 <0.7.0> -------- Application couch_peruser started on node 'node1@127.0.0.1'
[notice] 2017-07-12T22:04:10.470615Z node1@127.0.0.1 <0.74.0> -------- config: [features] scheduler set to true for reason nil
[info] 2017-07-12T22:04:10.479206Z node1@127.0.0.1 <0.7.0> -------- Application couch_replicator started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.479397Z node1@127.0.0.1 <0.7.0> -------- Application bear started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.482266Z node1@127.0.0.1 <0.7.0> -------- Application global_changes started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.482538Z node1@127.0.0.1 <0.7.0> -------- Application couch_plugins started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.485953Z node1@127.0.0.1 <0.7.0> -------- Application runtime_tools started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.486659Z node1@127.0.0.1 <0.7.0> -------- Application ddoc_cache started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.487439Z node1@127.0.0.1 <0.7.0> -------- Application couch_index started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.487690Z node1@127.0.0.1 <0.7.0> -------- Application couch_mrview started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.487927Z node1@127.0.0.1 <0.7.0> -------- Application snappy started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.488168Z node1@127.0.0.1 <0.7.0> -------- Application jiffy started on node 'node1@127.0.0.1'
[info] 2017-07-12T22:04:10.490912Z node1@127.0.0.1 <0.7.0> -------- Application mango started on node 'node1@127.0.0.1'
[notice] 2017-07-12T22:04:10.557583Z node1@127.0.0.1 <0.320.0> -------- Compaction swap for db: /home/travis/build/apache/couchdb/dev/lib/node1/data/_dbs.couch 209086 20670
[info] 2017-07-12T22:04:10.558894Z node1@127.0.0.1 <0.320.0> -------- Compaction for db "_dbs" completed.
[info] 2017-07-12T22:04:10.573322Z node1@127.0.0.1 <0.474.0> -------- Opening index for db: _replicator idx: _design/_replicator sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-07-12T22:04:10.623009Z node1@127.0.0.1 <0.484.0> -------- Starting compaction for db "shards/60000000-7fffffff/_global_changes.1499896983"
[notice] 2017-07-12T22:04:10.697090Z node1@127.0.0.1 <0.484.0> -------- Compaction swap for db: /home/travis/build/apache/couchdb/dev/lib/node1/data/shards/60000000-7fffffff/_global_changes.1499896983.couch 135358 12478
[info] 2017-07-12T22:04:10.705956Z node1@127.0.0.1 <0.484.0> -------- Compaction for db "shards/60000000-7fffffff/_global_changes.1499896983" completed.
[info] 2017-07-12T22:04:10.830374Z node1@127.0.0.1 <0.561.0> -------- Opening index for db: shards/40000000-5fffffff/_users.1499896983 idx: _design/_auth sig: "3e823c2a4383ac0c18d4e574135a5b08"
[info] 2017-07-12T22:04:10.853621Z node1@127.0.0.1 <0.580.0> -------- Opening index for db: shards/80000000-9fffffff/_replicator.1499896983 idx: _design/_replicator sig: "3e823c2a4383ac0c18d4e574135a5b08"
[notice] 2017-07-12T22:04:11.018617Z node1@127.0.0.1 <0.343.0> ac9d241d53 127.0.0.1:15984 127.0.0.1 undefined GET / 200 ok 1
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
[os_mon] memory supervisor port (memsup): Erlang has closed
```
I don't understand how the previous tests aren't showing up in that log. | test | test failure in compact js seems like a new one and a bit of an oddity travis log couchdb eunit make leaving directory home travis build apache couchdb make entering directory home travis build apache couchdb test javascript tests all docs js pass test javascript tests attachment names js pass test javascript tests attachment paths js pass test javascript tests attachment ranges js pass test javascript tests attachment views js pass test javascript tests attachments js pass test javascript tests attachments multipart js pass test javascript tests auth cache js pass test javascript tests basics js pass test javascript tests batch save js pass test javascript tests bulk docs js pass test javascript tests changes js pass test javascript tests coffee js pass test javascript tests compact js error failed to execute http request couldn t connect to host trace back most recent call first test javascript couch http js system test javascript couch js get node test javascript test setup js getuptime test javascript test setup js restartserver test javascript tests compact js test javascript cli runner js runtest test javascript cli runner js javascript tests complete failed skipped or passed make error make leaving directory home travis build apache couchdb make error and the dev logs log from the uploaded report application couch log started on node application folsom started on node application couch stats started on node application crypto started on node application sasl started on node application inets started on node application started on node application public key started on node application ssl started on node application os mon started on node application ibrowse started on node application xmerl started on node application compiler started on node application syntax tools started on node application mochiweb started on node application started on node application khash started on 
node application couch event started on node application ioq started on node apache couchdb is starting starting couch sup apache couchdb has started time to relax apache couchdb has started on application couch started on node application ets lru started on node application rexi started on node opening index for db users idx design auth sig starting compaction for db dbs application started on node application fabric started on node application chttpd started on node application setup started on node application couch peruser started on node config scheduler set to true for reason nil application couch replicator started on node application bear started on node application global changes started on node application couch plugins started on node application runtime tools started on node application ddoc cache started on node application couch index started on node application couch mrview started on node application snappy started on node application jiffy started on node application mango started on node compaction swap for db home travis build apache couchdb dev lib data dbs couch compaction for db dbs completed opening index for db replicator idx design replicator sig starting compaction for db shards global changes compaction swap for db home travis build apache couchdb dev lib data shards global changes couch compaction for db shards global changes completed opening index for db shards users idx design auth sig opening index for db shards replicator idx design replicator sig undefined get ok memory supervisor port memsup erlang has closed cpu supervisor port cpu sup erlang has closed cpu supervisor port cpu sup erlang has closed memory supervisor port memsup erlang has closed i don t understand how the previous tests aren t showing up in that log | 1 |
387,495 | 26,725,209,675 | IssuesEvent | 2023-01-29 16:26:14 | poivronjaune/orca | https://api.github.com/repos/poivronjaune/orca | opened | Document Government Organisations Involved | documentation | Search Canada's Endangered Species Web site to find all other government agencies involved.
[See Appendix 3](https://www.canada.ca/en/environment-climate-change/services/species-risk-public-registry/action-plans/killer-whale-northern-southern-resident.html#_AB)
 | 1.0 | Document Government Organisations Involved - Search Canada's Endangered Species Web site to find all other government agencies involved.
[See Appendix 3](https://www.canada.ca/en/environment-climate-change/services/species-risk-public-registry/action-plans/killer-whale-northern-southern-resident.html#_AB)
 | non_test | document government organisations involved search canada s endangered species web site to find all other government agencies involved | 0
110,623 | 9,462,668,716 | IssuesEvent | 2019-04-17 15:54:28 | LiskHQ/lisk-sdk | https://api.github.com/repos/LiskHQ/lisk-sdk | closed | Jenkins should run network test in the framework | jenkins test | ### Expected behavior
Jenkins should have jobs to run network test in framework folder
### Actual behavior
Currently disabled due to product merge | 1.0 | Jenkins should run network test in the framework - ### Expected behavior
Jenkins should have jobs to run network test in framework folder
### Actual behavior
Currently disabled due to product merge | test | jenkins should run network test in the framework expected behavior jenkins should have jobs to run network test in framework folder actual behavior currently disabled due to product merge | 1 |
155,826 | 13,634,018,450 | IssuesEvent | 2020-09-24 22:43:24 | awslabs/aws-perspective | https://api.github.com/repos/awslabs/aws-perspective | closed | Receiving 'account must be verified' on nested CF stack | bug documentation | **Describe the bug**
On deployment using the CloudFormation stack, it is halted on a nested-stack with the following:
`Your account must be verified before you can add new CloudFront resources. To verify your account, please contact AWS Support (https://console.aws.amazon.com/support/home#/ ) and include this error message. (Service: AmazonCloudFront; Status Code: 403; Error Code: AccessDenied; Request ID: XXX-XXX)`
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy the CloudFormation stack for aws-perspective per the deployment guide
2. The behavior is experienced on the nested stack creation for CloudFront
**Expected behavior**
Creation of the CloudFront distributions as expected.
**Additional context**
If additional service quotas or preliminary steps are required, specify it as such in the documentation. [Perhaps under the design considerations](https://docs.aws.amazon.com/solutions/latest/aws-perspective/design-considerations.html)?
| 1.0 | Receiving 'account must be verified' on nested CF stack - **Describe the bug**
On deployment using the CloudFormation stack, it is halted on a nested-stack with the following:
`Your account must be verified before you can add new CloudFront resources. To verify your account, please contact AWS Support (https://console.aws.amazon.com/support/home#/ ) and include this error message. (Service: AmazonCloudFront; Status Code: 403; Error Code: AccessDenied; Request ID: XXX-XXX)`
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy the CloudFormation stack for aws-perspective per the deployment guide
2. The behavior is experienced on the nested stack creation for CloudFront
**Expected behavior**
Creation of the CloudFront distributions as expected.
**Additional context**
If additional service quotas or preliminary steps are required, specify it as such in the documentation. [Perhaps under the design considerations](https://docs.aws.amazon.com/solutions/latest/aws-perspective/design-considerations.html)?
| non_test | receiving account must be verified on nested cf stack describe the bug on deployment using the cloudformation stack it is halted on a nested stack with the following your account must be verified before you can add new cloudfront resources to verify your account please contact aws support and include this error message service amazoncloudfront status code error code accessdenied request id xxx xxx to reproduce steps to reproduce the behavior deploy the cloudformation stack for aws perspective per the deployment guide the behavior is experienced on the nested stack creation for cloudfront expected behavior creation of the cloudfront distributions as expected additional context if additional service quotas or preliminary steps are required specify it as such in the documentation | 0 |
194,138 | 14,670,354,932 | IssuesEvent | 2020-12-30 04:32:51 | atom-ide-community/atom-script | https://api.github.com/repos/atom-ide-community/atom-script | closed | I can not install... | installation please-try-the-latest-version | I can not git clone, I have some "npm ERR", because my working environment can not use git protocol.
I can use http and https.
| 1.0 | I can not install... - I can not git clone, I have some "npm ERR", because my working environment can not use git protocol.
I can use http and https.
| test | i can not install i can not git clone i have some npm err because my working environment can not use git protocol i can use http and https | 1 |
199,328 | 6,988,313,567 | IssuesEvent | 2017-12-14 12:28:23 | wso2/product-iots | https://api.github.com/repos/wso2/product-iots | closed | Role Listing doesn't work as expected. | 3.1.0 3.1.0-update1 3.1.0-update2 3.1.0-update3 3.1.0-update4 3.1.0-Update5 3.1.0-Update6 cdmf enhancement medium Priority/High Type/Bug | **Description:**
The role listing must retrieve the number of roles which apply to the given filters from the user store, but at the moment we retrieve the 1st set of roles, which has roles equivalent to the value of the MaxRoleNameListLength property, and then apply the filtering at the device management API level. This results in always getting the same set of roles, and there will be a set of roles which never get listed in the device management API or UI.
**Suggested Labels:**
user management, role management
**Affected Product Version:**
3.1.0, 3.0.0
**Steps to reproduce:**
Change the MaxRoleNameListLength to 5 or 10 when there are more roles than that available.
Try searching for a role which comes in 11th or 12th place when listing in the carbon console.
The role will never be listed.
Role Listing doesn't work as expected. - **Description:**
The role listing must retrieve the number of roles which apply to the given filters from the user store, but at the moment we retrieve the 1st set of roles, which has roles equivalent to the value of the MaxRoleNameListLength property, and then apply the filtering at the device management API level. This results in always getting the same set of roles, and there will be a set of roles which never get listed in the device management API or UI.
**Suggested Labels:**
user management, role management
**Affected Product Version:**
3.1.0, 3.0.0
**Steps to reproduce:**
Change the MaxRoleNameListLength to 5 or 10 when there are more roles than that available.
Try searching for a role which comes in 11th or 12th place when listing in the carbon console.
The role will never be listed.
| non_test | role listing dosent work as expected description the role listing must retrieve the no of roles which applies to the given filters from the user store but at the moment we retrieve the set of roles which has roles equivalent to the value of the maxrolenamelistlength property and then apply the filtering on device management api level this results in always get the same set of roles and there will be a set of roles which never gets listed in device management api or ui suggested labels user management role management affected product version steps to reproduce change the maxrolenamelistlength to or when there is more than roles available try searching for the role which comes as the or place when listing in carbon console the role will never be listed | 0 |
270,833 | 23,541,240,823 | IssuesEvent | 2022-08-20 12:31:36 | arana-db/arana | https://api.github.com/repos/arana-db/arana | closed | [Test] add shadow & read_write_split scene for integration test. | test | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
**Why is this needed**:
Now there are three packages under the `integration_test/scene` package: `db`, `db_tbl`, `tbl`.
Now we need to add two other scenes: `read_write_split` and `shadow`. | 1.0 | [Test] add shadow & read_write_split scene for integration test. - <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
**Why is this needed**:
Now there are three packages under the `integration_test/scene` package: `db`, `db_tbl`, `tbl`.
Now we need to add two other scenes: `read_write_split` and `shadow`. | test | add shadow read write split scene for integration test what would you like to be added why is this needed now there are three packages at integration test scene package db db tbl tbl now we need add two other scene read write split and shadow | 1 |
296,999 | 25,590,985,566 | IssuesEvent | 2022-12-01 13:00:34 | akkadotnet/akka.net | https://api.github.com/repos/akkadotnet/akka.net | closed | `TestActorRef` can not catch exceptions on asynchronous methods | potential bug akka-testkit | **Version Information**
Version of Akka.NET? `Akka 1.4.46`
Which Akka.NET Modules? `Akka.TestKit 1.4.46`, `xunit 2.4.1`
**Describe the bug**
I am attempting to test the behavior of Actors. I am not very good at it yet, so every once in a while there is an exception, and the test just swallows the exception, and I have to go search why `ExpectMsg` is timing out.
I have found the best way to test the actor behavior is with `TestActorRef.Receive`, but it only works for synchronous methods.
**To Reproduce**
Test actor:
```C#
public class ExceptionActor : ReceiveActor
{
public record GiveError();
public record GiveErrorAsync();
public ExceptionActor()
{
Receive<GiveError>((b) =>
{
throw new Exception("WAT");
});
ReceiveAsync<GiveErrorAsync>(async (b) =>
{
await Task.Delay(TimeSpan.FromSeconds(0.1));
throw new Exception("WATASYNC");
});
}
}
```
This test fails (Expected behavior):
```C#
[Fact]
public void GetException()
{
var props = Props.Create<ExceptionActor>();
var subject = new TestActorRef<ExceptionActor>(Sys, props, null, "testA");
subject.Receive(new ExceptionActor.GiveError());
}
```
This test does not fail (Not expected behavior):
```C#
[Fact]
public void GetExceptionAsync()
{
var props = Props.Create<ExceptionActor>();
var subject = new TestActorRef<ExceptionActor>(Sys, props, null, "testB");
subject.Receive(new ExceptionActor.GiveErrorAsync());
}
```
**Expected behavior**
To have a `ReceiveAsync` method for asynchronous method handling.
OR
Some mechanism that automatically watches for any unhandled exceptions in tests, and have the test failure display the information.
**Actual behavior**
Unhandled exceptions disappear.
**Environment**
.NET Core v7 on Windows
**Additional context**
I am fairly new to akka.net, but I am loving it. I am following the instructions on [Testing Actor Systems](https://getakka.net/articles/actors/testing-actor-systems.html), and it feels like I am missing something. Are there no better ways to watch that unhandled exceptions exist? | 1.0 | `TestActorRef` can not catch exceptions on asynchronous methods - **Version Information**
Version of Akka.NET? `Akka 1.4.46`
Which Akka.NET Modules? `Akka.TestKit 1.4.46`, `xunit 2.4.1`
**Describe the bug**
I am attempting to test the behavior of Actors. I am not very good at it yet, so every once in a while there is an exception, and the test just swallows the exception, and I have to go search why `ExpectMsg` is timing out.
I have found the best way to test the actor behavior is with `TestActorRef.Receive`, but it only works for synchronous methods.
**To Reproduce**
Test actor:
```C#
public class ExceptionActor : ReceiveActor
{
public record GiveError();
public record GiveErrorAsync();
public ExceptionActor()
{
Receive<GiveError>((b) =>
{
throw new Exception("WAT");
});
ReceiveAsync<GiveErrorAsync>(async (b) =>
{
await Task.Delay(TimeSpan.FromSeconds(0.1));
throw new Exception("WATASYNC");
});
}
}
```
This test fails (Expected behavior):
```C#
[Fact]
public void GetException()
{
var props = Props.Create<ExceptionActor>();
var subject = new TestActorRef<ExceptionActor>(Sys, props, null, "testA");
subject.Receive(new ExceptionActor.GiveError());
}
```
This test does not fail (Not expected behavior):
```C#
[Fact]
public void GetExceptionAsync()
{
var props = Props.Create<ExceptionActor>();
var subject = new TestActorRef<ExceptionActor>(Sys, props, null, "testB");
subject.Receive(new ExceptionActor.GiveErrorAsync());
}
```
**Expected behavior**
To have a `ReceiveAsync` method for asynchronous method handling.
OR
Some mechanism that automatically watches for any unhandled exceptions in tests, and have the test failure display the information.
**Actual behavior**
Unhandled exceptions disappear.
**Environment**
.NET Core v7 on Windows
**Additional context**
I am fairly new to akka.net, but I am loving it. I am following the instructions on [Testing Actor Systems](https://getakka.net/articles/actors/testing-actor-systems.html), and it feels like I am missing something. Are there no better ways to watch that unhandled exceptions exist? | test | testactorref can not catch exceptions on asynchronous methods version information version of akka net akka which akka net modules akka testkit xunit describe the bug i am attempting to test the behavior of actors i am not very good at it yet so every once in a while there is an exception and the test just swallows the exception and i have to go search why expectmsg is timing out i have found the best way to test the actor behavior is with testactorref receive but it only works for synchronous methods to reproduce test actor c public class exceptionactor receiveactor public record giveerror public record giveerrorasync public exceptionactor receive b throw new exception wat receiveasync async b await task delay timespan fromseconds throw new exception watasync this test fails expected behavior c public void getexception var props props create var subject new testactorref sys props null testa subject receive new exceptionactor giveerror this test does not fail not expected behavior c public void getexceptionasync var props props create var subject new testactorref sys props null testb subject receive new exceptionactor giveerrorasync expected behavior to have a receiveasync method for asynchronous method handling or some mechanism that automatically watches for any unhandled exceptions in tests and have the test failure display the information actual behavior unhandled exceptions disappear environment net core on windows additional context i am fairly new to akka net but i am loving it i am following the instructions on and it feels like i am missing something are there no better ways to watch that unhandled exceptions exist | 1 |
149,934 | 23,550,923,210 | IssuesEvent | 2022-08-21 20:22:51 | ParadoxGameConverters/Vic3ToHoI4 | https://api.github.com/repos/ParadoxGameConverters/Vic3ToHoI4 | opened | Convert state categories | enhancement coding design | Vic2 to HoI4 mostly just made them big enough for the existing industry. This should be re-examined, and should handle cases like islands. Maybe population based? | 1.0 | Convert state categories - Vic2 to HoI4 mostly just made them big enough for the existing industry. This should be re-examined, and should handle cases like islands. Maybe population based? | non_test | convert state categories to mostly just made them big enough for the existing industry this should be re examined and should handle cases like islands maybe population based | 0 |
195,733 | 14,750,678,119 | IssuesEvent | 2021-01-08 02:51:00 | mdflynn/game-sleuth | https://api.github.com/repos/mdflynn/game-sleuth | opened | Test Solo Movie View | testing | Solo Movie View should have unit and integration tests to make sure it's running the way we want it to. | 1.0 | Test Solo Movie View - Solo Movie View should have unit and integration tests to make sure it's running the way we want it to. | test | test solo movie view solo movie view should have unit and integration tests to make sure its running the way we want it to | 1 |
106,248 | 9,125,538,742 | IssuesEvent | 2019-02-24 14:38:10 | vgstation-coders/vgstation13 | https://api.github.com/repos/vgstation-coders/vgstation13 | closed | Traitor mimes can't purchase Invisible Spray from uplink. | 100% tested Oversight | There's glue and glove gun, but invisible spray doesn't even show up on the list if you're a syndie mime. | 1.0 | Traitor mimes can't purchase Invisible Spray from uplink. - There's glue and glove gun, but invisible spray doesn't even show up on the list if you're a syndie mime. | test | traitor mimes can t purchase invisible spray from uplink there s glue and glove gun but invisible spray doesn t even show up on the list if you re a syndie mime | 1 |
608,979 | 18,851,810,123 | IssuesEvent | 2021-11-11 22:00:36 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | opened | [commercetools-blueprint] Add ICE | enhancement priority: low triage | ### Feature Request
#### Is your feature request related to a problem? Please describe.
commercetools BP doesn't support ICE
#### Describe the solution you'd like
Add ICE to the commercetools BP
#### Describe alternatives you've considered
{{A clear and concise description of any alternative solutions or features you've considered.}}
| 1.0 | [commercetools-blueprint] Add ICE - ### Feature Request
#### Is your feature request related to a problem? Please describe.
commercetools BP doesn't support ICE
#### Describe the solution you'd like
Add ICE to the commercetools BP
#### Describe alternatives you've considered
{{A clear and concise description of any alternative solutions or features you've considered.}}
| non_test | add ice feature request is your feature request related to a problem please describe commercetools bp doesn t support ice describe the solution you d like add ice to the commercetools bp describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered | 0 |
96,070 | 8,584,921,572 | IssuesEvent | 2018-11-14 00:51:08 | mono/monodevelop | https://api.github.com/repos/mono/monodevelop | opened | Test Pad QOL Improvements | Area: Unit Testing feature-request papercut vs-sync | cc @chamons
Migrated from devdiv 550420
I spent hours with VS over my vacation and found a number of areas in VSfM that could use some QOL love. The most painful non-broken things were limitations in the Unit Tests and Test Results pads.
- There is no search in units tests, which makes finding a specific test rather painful.
- You can not re-run tests in debug mode from the test results, just run, which is useless
- There is no obvious way to run tests from the source code. I later found out there is a setting "Enable text editor unit test integration" that is defaulted to off that gets me 99% of what I want. | 1.0 | Test Pad QOL Improvements - cc @chamons
Migrated from devdiv 550420
I spent hours with VS over my vacation and found a number of areas in VSfM that could use some QOL love. The most painful non-broken things were limitations in the Unit Tests and Test Results pads.
- There is no search in units tests, which makes finding a specific test rather painful.
- You can not re-run tests in debug mode from the test results, just run, which is useless
- There is no obvious way to run tests from the source code. I later found out there is a setting "Enable text editor unit test integration" that is defaulted to off that gets me 99% of what I want. | test | test pad qol improvements cc chamons migrated from devdiv i spent hours with vs over my vacation and found a number of areas in vsfm that could use some qol love the most painful non broken thing were limitations in the unit tests and test results pads there is no search in units tests which makes finding a specific test rather painful you can not re run tests in debug mode from the test results just run which is useless there is no obvious way to run tests from the source code i later found out there is a setting enable text editor unit test integration that is defaulted to off that gets me of what i want | 1 |
293,365 | 25,287,129,262 | IssuesEvent | 2022-11-16 20:18:40 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Jest Tests.x-pack/plugins/cases/public/components/create - Create case Step 1 - Case Fields should select LOW as the default severity | failed-test Team:ResponseOps Feature:Cases | A test failed on a tracked branch
```
TestingLibraryElementError: Unable to find an element by: [data-test-subj="caseSeverity"]
Ignored nodes: comments, <script />, <style />
<body
class=""
>
<div />
</body>
at Object.getElementError (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/@testing-library/dom/dist/config.js:38:19)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/@testing-library/dom/dist/query-helpers.js:90:38
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/@testing-library/dom/dist/query-helpers.js:62:17
at getByTestId (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/@testing-library/dom/dist/query-helpers.js:111:19)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/x-pack/plugins/cases/public/components/create/form_context.test.tsx:330:27)
at Promise.then.completed (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:276:28)
at new Promise (<anonymous>)
at callAsyncCircusFn (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:216:10)
at _callCircusTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:212:40)
at runNextTicks (node:internal/process/task_queues:61:5)
at processTimers (node:internal/timers:499:9)
at _runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:149:3)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:63:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at run (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:472:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/22487#0183d837-54d8-42fa-b89c-0d1b8d0b84b1)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Tests.x-pack/plugins/cases/public/components/create","test.name":"Create case Step 1 - Case Fields should select LOW as the default severity","test.failCount":1}} --> | 1.0 | Failing test: Jest Tests.x-pack/plugins/cases/public/components/create - Create case Step 1 - Case Fields should select LOW as the default severity - A test failed on a tracked branch
```
TestingLibraryElementError: Unable to find an element by: [data-test-subj="caseSeverity"]
Ignored nodes: comments, <script />, <style />
<body
class=""
>
<div />
</body>
at Object.getElementError (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/@testing-library/dom/dist/config.js:38:19)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/@testing-library/dom/dist/query-helpers.js:90:38
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/@testing-library/dom/dist/query-helpers.js:62:17
at getByTestId (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/@testing-library/dom/dist/query-helpers.js:111:19)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/x-pack/plugins/cases/public/components/create/form_context.test.tsx:330:27)
at Promise.then.completed (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:276:28)
at new Promise (<anonymous>)
at callAsyncCircusFn (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:216:10)
at _callCircusTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:212:40)
at runNextTicks (node:internal/process/task_queues:61:5)
at processTimers (node:internal/timers:499:9)
at _runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:149:3)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:63:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:57:9)
at run (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:176:21)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:109:19)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-44d53f83b83faf14/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:472:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/22487#0183d837-54d8-42fa-b89c-0d1b8d0b84b1)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Tests.x-pack/plugins/cases/public/components/create","test.name":"Create case Step 1 - Case Fields should select LOW as the default severity","test.failCount":1}} --> | test | failing test jest tests x pack plugins cases public components create create case step case fields should select low as the default severity a test failed on a tracked branch testinglibraryelementerror unable to find an element by ignored nodes comments body class at object getelementerror var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules testing library dom dist config js at var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules testing library dom dist query helpers js at var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules testing library dom dist query helpers js at getbytestid var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules testing library dom dist query helpers js at object var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack plugins cases public components create form context test tsx at promise then completed var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build utils js at new promise at callasynccircusfn var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build utils js at callcircustest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runnextticks node internal process task queues at processtimers node internal timers at runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb 
spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at run var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runandtransformresultstojestformat var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapterinit js at jestadapter var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapter js at runtestinternal var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js at runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js first failure | 1 |
63,246 | 6,835,301,158 | IssuesEvent | 2017-11-10 00:24:27 | Kademi/kademi-dev | https://api.github.com/repos/Kademi/kademi-dev | closed | KCRM funnel not working | bug Help Wanted High priority Ready to Test - Dev | http://kcrm.kademi.us/analytics/kademi-sales?filters=source%3DNone&stage=Hot
Clicking on hot does not show the correct list of leads in the table below.
Note this lead is HOT but is not in the table http://kcrm.kademi.us/leads/15869188/
| 1.0 | KCRM funnel not working - http://kcrm.kademi.us/analytics/kademi-sales?filters=source%3DNone&stage=Hot
Clicking on hot does not show the correct list of leads in the table below.
Note this lead is HOT but is not in the table http://kcrm.kademi.us/leads/15869188/
| test | kcrm funnel not working clicking on hot does not show the correct list of leads in the table below note this lead is hot but is not in the table | 1 |
218,019 | 16,746,975,065 | IssuesEvent | 2021-06-11 16:47:36 | 17cupsofcoffee/tetra | https://api.github.com/repos/17cupsofcoffee/tetra | closed | Add an example of using Tetra with an ECS library | Area: Documentation Good First Issue Type: Feature Request | I'm not sure whether I plan on using anything as heavy duty as Specs for now, but it'd be good to have an example in the repository, both for the sake of documentation and to make sure any API changes we make play nicely.
[I ported my `rl` demo from GGEZ to Tetra](https://github.com/17cupsofcoffee/rl/blob/master/src/main.rs) and it seems to work well, although it's probably a bit more complicated than what we're looking for here. | 1.0 | Add an example of using Tetra with an ECS library - I'm not sure whether I plan on using anything as heavy duty as Specs for now, but it'd be good to have an example in the repository, both for the sake of documentation and to make sure any API changes we make play nicely.
[I ported my `rl` demo from GGEZ to Tetra](https://github.com/17cupsofcoffee/rl/blob/master/src/main.rs) and it seems to work well, although it's probably a bit more complicated than what we're looking for here. | non_test | add an example of using tetra with an ecs library i m not sure whether i plan on using anything as heavy duty as specs for now but it d be good to have an example in the repository both for the sake of documentation and to make sure any api changes we make play nicely and it seems to work well although it s probably a bit more complicated than what we re looking for here | 0 |
317,646 | 27,251,452,320 | IssuesEvent | 2023-02-22 08:26:32 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | “kube-apiserver” cannot be pulled up by kubelet | kind/failing-test | ### Which jobs are failing?
Upgrading from 1.20.15 to 1.21.14 with kubespray had no problems, but upgrading from 1.21.14 to 1.22.16 failed. The key commands are as follows:
kubeadm upgrade apply -y v1.22.16 --certificate-renewal=True --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=all --allow-experimental-upgrades --etcd-upgrade=false --force --v=5
I found that modify "/etc/kubernetes manifests/kube-apiserver.yaml" file, kubelet won't pull up the pod, But I can restart kube-apiserver with systemctl restart kubelet
### Which tests are failing?
upgrade
### Since when has it been failing?
upgrade
### Testgrid link
_No response_
### Reason for failure (if possible)
_No response_
### Anything else we need to know?
After the upgrade fails, the static file logs are modified
I0222 10:36:19.024703 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:20.025083 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:21.024845 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:22.024467 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:23.025379 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:23.360491 22533 kubelet.go:2161] "SyncLoop (SYNC) pods" total=2 pods=[kube-system/kube-apiserver-k1 kube-system/kube-controller-manager-k1]
I0222 10:36:23.360808 22533 pod_workers.go:882] "Pod cannot start yet" pod="kube-system/kube-apiserver-k1" podUID=075ec6b806a03cd1244165970f8fff18
I0222 10:36:24.025305 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:25.024721 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:25.202300 22533 common.go:73] "Generated pod name" pod="kube-system/kube-apiserver-k1" podUID=075ec6b806a03cd1244165970f8fff18 source="/etc/kubernetes/manifests/kube-apiserver.yaml"
I0222 10:36:25.202332 22533 common.go:78] "Set namespace for pod" pod="kube-system/kube-apiserver-k1" source="/etc/kubernetes/manifests/kube-apiserver.yaml"
I0222 10:36:26.024464 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:27.026350 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:28.024795 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:29.024782 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:30.025001 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:49.024541 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:50.024481 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:50.360123 22533 kubelet.go:2161] "SyncLoop (SYNC) pods" total=2 pods=[kubesphere-logging-system/logsidecar-injector-deploy-6684594c6d-97dkx kube-system/kube-apiserver-k1]
I0222 10:36:50.360276 22533 pod_workers.go:882] "Pod cannot start yet" pod="kube-system/kube-apiserver-k1" podUID=075ec6b806a03cd1244165970f8fff18
I0222 10:36:51.025433 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:52.024788 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:53.024407 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:54.025114 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:55.025360 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:55.524551 22533 kuberuntime_container.go:732] "Container exited normally" pod="kube-system/kube-apiserver-k1" podUID=60c487d9a509f8411e449e945425c9ae containerName="kube-apiserver" containerID="docker://5db5124a0370e8eaeb0a0c522dd0fd099f59ed001be2c590a46418b2d851df53"
I0222 10:36:56.024940 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:56.490471 22533 kuberuntime_manager.go:1031] "getSandboxIDByPodUID got sandbox IDs for pod" podSandboxID=[e7390a775453ae4c325f083fa318d624c2d5acd0ac3b09ecec5af3fb1b3a4585] pod="kube-system/kube-apiserver-k1"
I0222 10:36:57.024954 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:19.025398 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:20.024644 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:21.025222 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:22.025402 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:23.024447 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:24.025015 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:25.025216 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:25.200914 22533 common.go:73] "Generated pod name" pod="kube-system/kube-apiserver-k1" podUID=075ec6b806a03cd1244165970f8fff18 source="/etc/kubernetes/manifests/kube-apiserver.yaml"
I0222 10:37:25.200929 22533 common.go:78] "Set namespace for pod" pod="kube-system/kube-apiserver-k1" source="/etc/kubernetes/manifests/kube-apiserver.yaml"
I0222 10:37:26.024845 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:26.360456 22533 kubelet.go:2161] "SyncLoop (SYNC) pods" total=4 pods=[kube-system/kube-apiserver-k1 kubesphere-system/ks-apiserver-6f9b8f448c-8hqtq kubesphere-system/ks-installer-f569d549-pdd6f kubesphere-logging-system/elasticsearch-logging-master-0]
I0222 10:37:26.360656 22533 pod_workers.go:882] "Pod cannot start yet" pod="kube-system/kube-apiserver-k1" podUID=075ec6b806a03cd1244165970f8fff18
### Relevant SIG(s)
/sig | 1.0 | “kube-apiserver” cannot be pulled up by kubelet - ### Which jobs are failing?
Using kubespray to upgrade 1.20.15 to 1.21.14 had no problems, but failed to upgrade from 1.21.14 to 1.22.16. The key commands are as follows:
kubeadm upgrade apply -y v1.22.16 --certificate-renewal=True --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=all --allow-experimental-upgrades --etcd-upgrade=false --force --v=5
I found that modify "/etc/kubernetes manifests/kube-apiserver.yaml" file, kubelet won't pull up the pod, But I can restart kube-apiserver with systemctl restart kubelet
### Which tests are failing?
upgrade
### Since when has it been failing?
upgrade
### Testgrid link
_No response_
### Reason for failure (if possible)
_No response_
### Anything else we need to know?
After the upgrade fails, the static file logs are modified
I0222 10:36:19.024703 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:20.025083 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:21.024845 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:22.024467 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:23.025379 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:23.360491 22533 kubelet.go:2161] "SyncLoop (SYNC) pods" total=2 pods=[kube-system/kube-apiserver-k1 kube-system/kube-controller-manager-k1]
I0222 10:36:23.360808 22533 pod_workers.go:882] "Pod cannot start yet" pod="kube-system/kube-apiserver-k1" podUID=075ec6b806a03cd1244165970f8fff18
I0222 10:36:24.025305 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:25.024721 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:25.202300 22533 common.go:73] "Generated pod name" pod="kube-system/kube-apiserver-k1" podUID=075ec6b806a03cd1244165970f8fff18 source="/etc/kubernetes/manifests/kube-apiserver.yaml"
I0222 10:36:25.202332 22533 common.go:78] "Set namespace for pod" pod="kube-system/kube-apiserver-k1" source="/etc/kubernetes/manifests/kube-apiserver.yaml"
I0222 10:36:26.024464 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:27.026350 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:28.024795 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:29.024782 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:30.025001 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:49.024541 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:50.024481 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:50.360123 22533 kubelet.go:2161] "SyncLoop (SYNC) pods" total=2 pods=[kubesphere-logging-system/logsidecar-injector-deploy-6684594c6d-97dkx kube-system/kube-apiserver-k1]
I0222 10:36:50.360276 22533 pod_workers.go:882] "Pod cannot start yet" pod="kube-system/kube-apiserver-k1" podUID=075ec6b806a03cd1244165970f8fff18
I0222 10:36:51.025433 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:52.024788 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:53.024407 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:54.025114 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:55.025360 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:55.524551 22533 kuberuntime_container.go:732] "Container exited normally" pod="kube-system/kube-apiserver-k1" podUID=60c487d9a509f8411e449e945425c9ae containerName="kube-apiserver" containerID="docker://5db5124a0370e8eaeb0a0c522dd0fd099f59ed001be2c590a46418b2d851df53"
I0222 10:36:56.024940 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:36:56.490471 22533 kuberuntime_manager.go:1031] "getSandboxIDByPodUID got sandbox IDs for pod" podSandboxID=[e7390a775453ae4c325f083fa318d624c2d5acd0ac3b09ecec5af3fb1b3a4585] pod="kube-system/kube-apiserver-k1"
I0222 10:36:57.024954 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:19.025398 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:20.024644 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:21.025222 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:22.025402 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:23.024447 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:24.025015 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:25.025216 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:25.200914 22533 common.go:73] "Generated pod name" pod="kube-system/kube-apiserver-k1" podUID=075ec6b806a03cd1244165970f8fff18 source="/etc/kubernetes/manifests/kube-apiserver.yaml"
I0222 10:37:25.200929 22533 common.go:78] "Set namespace for pod" pod="kube-system/kube-apiserver-k1" source="/etc/kubernetes/manifests/kube-apiserver.yaml"
I0222 10:37:26.024845 22533 worker.go:187] "No status for pod" pod="kube-system/kube-apiserver-k1"
I0222 10:37:26.360456 22533 kubelet.go:2161] "SyncLoop (SYNC) pods" total=4 pods=[kube-system/kube-apiserver-k1 kubesphere-system/ks-apiserver-6f9b8f448c-8hqtq kubesphere-system/ks-installer-f569d549-pdd6f kubesphere-logging-system/elasticsearch-logging-master-0]
I0222 10:37:26.360656 22533 pod_workers.go:882] "Pod cannot start yet" pod="kube-system/kube-apiserver-k1" podUID=075ec6b806a03cd1244165970f8fff18
### Relevant SIG(s)
/sig | test | “kube apiserver” cannot be pulled up by kubelet which jobs are failing using kubespray to upgrade to had no problems but failed to upgrade from to the key commands are as follows kubeadm upgrade apply y certificate renewal true config etc kubernetes kubeadm config yaml ignore preflight errors all allow experimental upgrades etcd upgrade false force v i found that modify etc kubernetes manifests kube apiserver yaml file kubelet won t pull up the pod but i can restart kube apiserver with systemctl restart kubelet which tests are failing upgrade since when has it been failing upgrade testgrid link no response reason for failure if possible no response anything else we need to know after the upgrade fails the static file logs are modified worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver kubelet go syncloop sync pods total pods pod workers go pod cannot start yet pod kube system kube apiserver poduid worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver common go generated pod name pod kube system kube apiserver poduid source etc kubernetes manifests kube apiserver yaml common go set namespace for pod pod kube system kube apiserver source etc kubernetes manifests kube apiserver yaml worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver kubelet go syncloop sync pods total pods pod workers go pod 
cannot start yet pod kube system kube apiserver poduid worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver kuberuntime container go container exited normally pod kube system kube apiserver poduid containername kube apiserver containerid docker worker go no status for pod pod kube system kube apiserver kuberuntime manager go getsandboxidbypoduid got sandbox ids for pod podsandboxid pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver worker go no status for pod pod kube system kube apiserver common go generated pod name pod kube system kube apiserver poduid source etc kubernetes manifests kube apiserver yaml common go set namespace for pod pod kube system kube apiserver source etc kubernetes manifests kube apiserver yaml worker go no status for pod pod kube system kube apiserver kubelet go syncloop sync pods total pods pod workers go pod cannot start yet pod kube system kube apiserver poduid relevant sig s sig | 1 |
8,964 | 3,013,627,065 | IssuesEvent | 2015-07-29 10:07:40 | photonstorm/phaser | https://api.github.com/repos/photonstorm/phaser | closed | [2.4] Button callback doesn't trigger on click | user to test | In 2.4.0, the button callback doesn't trigger on mouse click. With phone touch works fine (and the same code works under 2.3). Maybe related to the changes to Input.Mouse?
Code (trivial, but you never knows...):
```javascript
var aButton = this.add.button(400, 300, 'aButton', function() {
console.log('aButtonCallback');
//do stuffs
}, this);
``` | 1.0 | [2.4] Button callback doesn't trigger on click - In 2.4.0, the button callback doesn't trigger on mouse click. With phone touch works fine (and the same code works under 2.3). Maybe related to the changes to Input.Mouse?
Code (trivial, but you never knows...):
```javascript
var aButton = this.add.button(400, 300, 'aButton', function() {
console.log('aButtonCallback');
//do stuffs
}, this);
``` | test | button callback doesn t trigger on click in the button callback doesn t trigger on mouse click with phone touch works fine and the same code works under maybe related to the changes to input mouse code trivial but you never knows javascript var abutton this add button abutton function console log abuttoncallback do stuffs this | 1 |
276,802 | 24,021,096,739 | IssuesEvent | 2022-09-15 07:43:51 | ValveSoftware/Dota-2 | https://api.github.com/repos/ValveSoftware/Dota-2 | closed | Battlepass broken on (my?) mac | Need Retest | #### Your system information
* System information from steam (`Steam` -> `Help` -> `System Information`) in a [gist](https://gist.github.com/):
https://gist.github.com/reutsharabani/b87192e87526ec68bbbc7f5688a6792a
* Have you checked for system updates?: Yes
* Are you using the latest stable video driver available for your system? Yes
* Have you verified the game files?: Yes
#### Please describe your issue in as much detail as possible:
Describe what you _expected_ should happen and what _did_ happen. Please link any large pastes as a [Github Gist](https://gist.github.com/).
Battle pass does not start on mac (again...)
These are my logs:
https://gist.github.com/reutsharabani/33f0863aa2e966fdef1161c5b96937d5
I ran:
```sh
exec ~/Library/Application\ Support/Steam/steamapps/common/dota\ 2\ beta/game/dota.sh
```
#### Steps for reproducing this issue:
1. buy mac
2. run dota
| 1.0 | Battlepass broken on (my?) mac - #### Your system information
* System information from steam (`Steam` -> `Help` -> `System Information`) in a [gist](https://gist.github.com/):
https://gist.github.com/reutsharabani/b87192e87526ec68bbbc7f5688a6792a
* Have you checked for system updates?: Yes
* Are you using the latest stable video driver available for your system? Yes
* Have you verified the game files?: Yes
#### Please describe your issue in as much detail as possible:
Describe what you _expected_ should happen and what _did_ happen. Please link any large pastes as a [Github Gist](https://gist.github.com/).
Battle pass does not start on mac (again...)
These are my logs:
https://gist.github.com/reutsharabani/33f0863aa2e966fdef1161c5b96937d5
I ran:
```sh
exec ~/Library/Application\ Support/Steam/steamapps/common/dota\ 2\ beta/game/dota.sh
```
#### Steps for reproducing this issue:
1. buy mac
2. run dota
| test | battlepass broken on my mac your system information system information from steam steam help system information in a have you checked for system updates yes are you using the latest stable video driver available for your system yes have you verified the game files yes please describe your issue in as much detail as possible describe what you expected should happen and what did happen please link any large pastes as a battle pass does not start on mac again these are my logs i ran sh exec library application support steam steamapps common dota beta game dota sh steps for reproducing this issue buy mac run dota | 1 |
128,402 | 10,531,585,198 | IssuesEvent | 2019-10-01 08:51:49 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Editor warns of unsaved changes even if there are none | Needs Testing | **Describe the bug**
With Gutenberg 6.4.0, the editor sometimes warns of unsaved changes when navigating away, even if all changes are saved. This happens with a clean installation and only e.g. a paragraph in the content.
**To reproduce**
Steps to reproduce the behavior:
1. Create a new post with some content (irrelevant which blocks) and schedule it.
2. Attempt to navigate away from the editor view.
**Expected behavior**
The navigation should happen with no warnings.
**Desktop (please complete the following information):**
- OS: Windows and OSX
- Browser: Chrome, Safari, Firefox
- Version : various | 1.0 | Editor warns of unsaved changes even if there are none - **Describe the bug**
With Gutenberg 6.4.0, the editor sometimes warns of unsaved changes when navigating away, even if all changes are saved. This happens with a clean installation and only e.g. a paragraph in the content.
**To reproduce**
Steps to reproduce the behavior:
1. Create a new post with some content (irrelevant which blocks) and schedule it.
2. Attempt to navigate away from the editor view.
**Expected behavior**
The navigation should happen with no warnings.
**Desktop (please complete the following information):**
- OS: Windows and OSX
- Browser: Chrome, Safari, Firefox
- Version : various | test | editor warns of unsaved changes even if there are none describe the bug with gutenberg the editor sometimes warns of unsaved changes when navigating away even if all changes are saved this happens with a clean installation and only e g a paragraph in the content to reproduce steps to reproduce the behavior create a new post with some content irrelevant which blocks and schedule it attempt to navigate away from the editor view expected behavior the navigation should happen with no warnings desktop please complete the following information os windows and osx browser chrome safari firefox version various | 1 |
277,179 | 24,054,532,429 | IssuesEvent | 2022-09-16 15:34:05 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | closed | Release 4.3.8 - Revision 1 - Release Candidate RC1 - Footprint Metrics - SYSCOLLECTOR (1.0d) | release test/4.3.8 | ## Footprint metrics information
| | |
|---------------------------------|--------------------------------------------|
| **Main release candidate issue #** | #14827 |
| **Main footprint metrics issue #** | #14859 |
| **Version** | 4.3.8 |
| **Release candidate #** | RC1 |
| **Tag** | https://github.com/wazuh/wazuh/tree/4.3.8-rc1 |
## Stress test documentation
### Packages used
- Repository: `packages-dev.wazuh.com`
- Package path: `pre-release`
- Package revision: `1`
- **Jenkins build**: https://ci.wazuh.info/job/Test_stress/3562/
---
<details><summary>Manager</summary>
+ <details><summary>Plots</summary>
















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3562_manager_2022-09-14.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_manager_centos/logs/ossec_Test_stress_B3562_manager_2022-09-14.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-manager-Test_stress_B3562_manager-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_manager_centos/data/monitor-manager-Test_stress_B3562_manager-pre-release.csv)
[Test_stress_B3562_manager_analysisd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_manager_centos/data/Test_stress_B3562_manager_analysisd_state.csv)
[Test_stress_B3562_manager_remoted_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_manager_centos/data/Test_stress_B3562_manager_remoted_state.csv)
</details>
</details>
<details><summary>Centos agent</summary>
+ <details><summary>Plots</summary>

















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3562_centos_2022-09-14.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_centos/logs/ossec_Test_stress_B3562_centos_2022-09-14.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-agent-Test_stress_B3562_centos-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_centos/data/monitor-agent-Test_stress_B3562_centos-pre-release.csv)
[Test_stress_B3562_centos_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_centos/data/Test_stress_B3562_centos_agentd_state.csv)
</details>
</details>
<details><summary>Ubuntu agent</summary>
+ <details><summary>Plots</summary>

















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3562_ubuntu_2022-09-14.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_ubuntu/logs/ossec_Test_stress_B3562_ubuntu_2022-09-14.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-agent-Test_stress_B3562_ubuntu-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_ubuntu/data/monitor-agent-Test_stress_B3562_ubuntu-pre-release.csv)
[Test_stress_B3562_ubuntu_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_ubuntu/data/Test_stress_B3562_ubuntu_agentd_state.csv)
</details>
</details>
<details><summary>Windows agent</summary>
+ <details><summary>Plots</summary>















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3562_windows_2022-09-14.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_windows/logs/ossec_Test_stress_B3562_windows_2022-09-14.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-winagent-Test_stress_B3562_windows-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_windows/data/monitor-winagent-Test_stress_B3562_windows-pre-release.csv)
[Test_stress_B3562_windows_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_windows/data/Test_stress_B3562_windows_agentd_state.csv)
</details>
</details>
<details><summary>macOS agent</summary>
+ <details><summary>Plots</summary>
</details>
+ <details><summary>Logs and configuration</summary>
</details>
+ <details><summary>CSV</summary>
</details>
</details>
<details><summary>Solaris agent</summary>
+ <details><summary>Plots</summary>
</details>
+ <details><summary>Logs and configuration</summary>
</details>
+ <details><summary>CSV</summary>
</details>
</details> | 1.0 | Release 4.3.8 - Revision 1 - Release Candidate RC1 - Footprint Metrics - SYSCOLLECTOR (1.0d) - ## Footprint metrics information
| | |
|---------------------------------|--------------------------------------------|
| **Main release candidate issue #** | #14827 |
| **Main footprint metrics issue #** | #14859 |
| **Version** | 4.3.8 |
| **Release candidate #** | RC1 |
| **Tag** | https://github.com/wazuh/wazuh/tree/4.3.8-rc1 |
## Stress test documentation
### Packages used
- Repository: `packages-dev.wazuh.com`
- Package path: `pre-release`
- Package revision: `1`
- **Jenkins build**: https://ci.wazuh.info/job/Test_stress/3562/
---
<details><summary>Manager</summary>
+ <details><summary>Plots</summary>
















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3562_manager_2022-09-14.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_manager_centos/logs/ossec_Test_stress_B3562_manager_2022-09-14.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-manager-Test_stress_B3562_manager-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_manager_centos/data/monitor-manager-Test_stress_B3562_manager-pre-release.csv)
[Test_stress_B3562_manager_analysisd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_manager_centos/data/Test_stress_B3562_manager_analysisd_state.csv)
[Test_stress_B3562_manager_remoted_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_manager_centos/data/Test_stress_B3562_manager_remoted_state.csv)
</details>
</details>
<details><summary>Centos agent</summary>
+ <details><summary>Plots</summary>

















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3562_centos_2022-09-14.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_centos/logs/ossec_Test_stress_B3562_centos_2022-09-14.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-agent-Test_stress_B3562_centos-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_centos/data/monitor-agent-Test_stress_B3562_centos-pre-release.csv)
[Test_stress_B3562_centos_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_centos/data/Test_stress_B3562_centos_agentd_state.csv)
</details>
</details>
<details><summary>Ubuntu agent</summary>
+ <details><summary>Plots</summary>

















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3562_ubuntu_2022-09-14.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_ubuntu/logs/ossec_Test_stress_B3562_ubuntu_2022-09-14.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-agent-Test_stress_B3562_ubuntu-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_ubuntu/data/monitor-agent-Test_stress_B3562_ubuntu-pre-release.csv)
[Test_stress_B3562_ubuntu_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_ubuntu/data/Test_stress_B3562_ubuntu_agentd_state.csv)
</details>
</details>
<details><summary>Windows agent</summary>
+ <details><summary>Plots</summary>















</details>
+ <details><summary>Logs and configuration</summary>
[ossec_Test_stress_B3562_windows_2022-09-14.zip](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_windows/logs/ossec_Test_stress_B3562_windows_2022-09-14.zip)
</details>
+ <details><summary>CSV</summary>
[monitor-winagent-Test_stress_B3562_windows-pre-release.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_windows/data/monitor-winagent-Test_stress_B3562_windows-pre-release.csv)
[Test_stress_B3562_windows_agentd_state.csv](https://ci.wazuh.com/data/Test_stress/pre-release/4.3.8/B3562-1440m/B3562_agent_windows/data/Test_stress_B3562_windows_agentd_state.csv)
</details>
</details>
<details><summary>macOS agent</summary>
+ <details><summary>Plots</summary>
</details>
+ <details><summary>Logs and configuration</summary>
</details>
+ <details><summary>CSV</summary>
</details>
</details>
<details><summary>Solaris agent</summary>
+ <details><summary>Plots</summary>
</details>
+ <details><summary>Logs and configuration</summary>
</details>
+ <details><summary>CSV</summary>
</details>
</details> | test | release revision release candidate footprint metrics syscollector footprint metrics information main release candidate issue main footprint metrics issue version release candidate tag stress test documentation packages used repository packages dev wazuh com package path pre release package revision jenkins build manager plots logs and configuration csv centos agent plots logs and configuration csv ubuntu agent plots logs and configuration csv windows agent plots logs and configuration csv macos agent plots logs and configuration csv solaris agent plots logs and configuration csv | 1 |
255,833 | 21,959,236,338 | IssuesEvent | 2022-05-24 14:32:25 | rstudio/rstudio | https://api.github.com/repos/rstudio/rstudio | closed | GPU and Accessibility Diagnostics Windows have menubar on Linux and Windows | bug electron test | <!--
IMPORTANT: Please fill out this template fully! Failure to do so will result in the issue being closed automatically.
This issue tracker is for bugs and feature requests in the RStudio IDE. If you're having trouble with R itself or an R package, see https://www.r-project.org/help.html, and if you want to ask a question rather than report a bug, go to https://community.rstudio.com/. Finally, if you use RStudio Server Pro, get in touch with our Pro support team at support@rstudio.com.
-->
### System details
RStudio Edition : Desktop (Electron)
RStudio Version : 2022.06.0-daily+356
OS Version : Windows-11, Ubuntu 22
R Version : Any
### Steps to reproduce the problem
On Windows or Linux (only tried Gnome/Ubuntu 22), start IDE and display either the GPU Diagnostics or Accessibility Diagnostics windows.
### Describe the problem in detail
The windows have a menubar (File, Edit, etc).

### Describe the behavior you expected
Menubar not displayed.
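On the Electron desktop builds, one common way to keep a menubar off a secondary window is at window-creation time. A minimal sketch (the option name below is a real Electron `BrowserWindow` setting, but applying it to these diagnostics windows is an assumption, not the IDE's actual fix):

```javascript
// Options one might pass to Electron's `new BrowserWindow(...)` when
// opening a utility window such as the GPU/Accessibility diagnostics.
// `autoHideMenuBar: true` keeps the File/Edit/... bar hidden on
// Windows/Linux; a full removal would instead call `win.removeMenu()`
// after the window is created.
function diagnosticsWindowOptions() {
  return {
    width: 800,
    height: 600,
    autoHideMenuBar: true, // suppress the menubar by default
  };
}
```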
<!--
Please keep the below portion in your issue, and check `[x]` the applicable boxes.
-->
- [x] I have read the guide for [submitting good bug reports](https://github.com/rstudio/rstudio/wiki/Writing-Good-Bug-Reports).
- [x] I have installed the latest version of RStudio, and confirmed that the issue still persists.
- [x] If I am reporting an RStudio crash, I have included a [diagnostics report](https://support.rstudio.com/hc/en-us/articles/200321257-Running-a-Diagnostics-Report).
- [x] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
| 1.0 | GPU and Accessibility Diagnostics Windows have menubar on Linux and Windows - <!--
IMPORTANT: Please fill out this template fully! Failure to do so will result in the issue being closed automatically.
This issue tracker is for bugs and feature requests in the RStudio IDE. If you're having trouble with R itself or an R package, see https://www.r-project.org/help.html, and if you want to ask a question rather than report a bug, go to https://community.rstudio.com/. Finally, if you use RStudio Server Pro, get in touch with our Pro support team at support@rstudio.com.
-->
### System details
RStudio Edition : Desktop (Electron)
RStudio Version : 2022.06.0-daily+356
OS Version : Windows-11, Ubuntu 22
R Version : Any
### Steps to reproduce the problem
On Windows or Linux (only tried Gnome/Ubuntu 22), start IDE and display either the GPU Diagnostics or Accessibility Diagnostics windows.
### Describe the problem in detail
The windows have a menubar (File, Edit, etc).

### Describe the behavior you expected
Menubar not displayed.
<!--
Please keep the below portion in your issue, and check `[x]` the applicable boxes.
-->
- [x] I have read the guide for [submitting good bug reports](https://github.com/rstudio/rstudio/wiki/Writing-Good-Bug-Reports).
- [x] I have installed the latest version of RStudio, and confirmed that the issue still persists.
- [x] If I am reporting an RStudio crash, I have included a [diagnostics report](https://support.rstudio.com/hc/en-us/articles/200321257-Running-a-Diagnostics-Report).
- [x] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
| test | gpu and accessibility diagnostics windows have menubar on linux and windows important please fill out this template fully failure to do so will result in the issue being closed automatically this issue tracker is for bugs and feature requests in the rstudio ide if you re having trouble with r itself or an r package see and if you want to ask a question rather than report a bug go to finally if you use rstudio server pro get in touch with our pro support team at support rstudio com system details rstudio edition desktop electron rstudio version daily os version windows ubuntu r version any steps to reproduce the problem on windows or linux only tried gnome ubuntu start ide and display either the gpu diagnostics or accessibility diagnostics windows describe the problem in detail the windows have a menubar file edit etc describe the behavior you expected menubar not displayed please keep the below portion in your issue and check the applicable boxes i have read the guide for i have installed the latest version of rstudio and confirmed that the issue still persists if i am reporting an rstudio crash i have included a i have done my best to include a minimal self contained set of instructions for consistently reproducing the issue | 1 |
49,963 | 6,047,829,191 | IssuesEvent | 2017-06-12 15:13:03 | CorfuDB/CorfuDB | https://api.github.com/repos/CorfuDB/CorfuDB | closed | SIGSEGV while running large write simulation workload | bug in progress Stability Testing | Initially reported by Johnny Z, using `master` branch as of commit b314266037be66af3cf1c85c2911f8c99a4664f6 (Tue Oct 4 15:00:47 2016 -0700). Workload is a single client writing 64000000 byte blobs sequentially; contact me or Johnny for details. The JVM crashes consistently & reliably with SIGSEGV after writing the 23rd or 24th chunk.
The server is run using a single node layout using:
```
env CORFUDB_HEAP=16364 SERVER_JVMFLAGS="-XX:MaxDirectMemorySize=8g" ./bin/corfu_server -l `pwd`/tmp-data-dir -d TRACE 8000 2>&1 | tee RUNLOG
```
Raising or lowering the values of `CORFUDB_HEAP` or the `MaxDirectMemorySize` don't make an obvious change in behavior: still the 23rd or 24th chunk.
Tail of console output when run with `-d TRACE`. I don't know offhand what the `o.c.i.LogUnitServer - Eviction[0]: SIZE` messages mean, but they start with index numbers at zero and count up incrementally for the first 8 writes ... and then jump up to (coincidentally?) the write number just processed by the server, e.g. jump from 7 to 23, and the segfault happens immediately afterward.
```
09:22:27.228 TRACE [ForkJoinPool.commonPool-worker-25] o.c.i.LogUnitServer - Eviction[4]: SIZE
09:22:27.228 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to SequencerServer: TokenRequestMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=41, epoch=1, buf=SlicedAbstractByteBuf(freed), msgType=TOKEN_REQ), streamIDs=[845d1324-f453-3775-9413-85e626ee1c35], numTokens=1, tokenFlags=[])
09:22:27.229 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: TokenResponseMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=41, epoch=1, buf=null, msgType=TOKEN_RES), token=20, backpointerMap={845d1324-f453-3775-9413-85e626ee1c35=-1})
09:22:27.638 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to LogUnitServer: LogUnitWriteMsg(super=LogUnitPayloadMsg(payload=null, data=SlicedAbstractByteBuf(ridx: 0, widx: 63999990, cap: 63999990/63999990, unwrapped: PooledUnsafeDirectByteBuf(ridx: 64000098, widx: 64000098, cap: 67108864)), serializer=org.corfudb.util.serializer.CorfuSerializer@274c788a), address=20)
09:22:27.639 TRACE [event-0] o.c.i.LogUnitServer - Handling write request for address 20
09:22:27.639 TRACE [event-0] o.c.i.LogUnitServer - Write[20]
09:22:27.677 INFO [event-0] o.c.i.l.RollingLog - Disk_write[20]: Written to disk.
09:22:27.773 TRACE [ForkJoinPool.commonPool-worker-18] o.c.i.LogUnitServer - Eviction[5]: SIZE
09:22:27.773 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=42, epoch=1, buf=null, msgType=ERROR_OK)
09:22:27.778 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to SequencerServer: TokenRequestMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=43, epoch=1, buf=SlicedAbstractByteBuf(freed), msgType=TOKEN_REQ), streamIDs=[6fb32f53-9543-3e71-ae69-513d904b2ad2], numTokens=1, tokenFlags=[])
09:22:27.778 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: TokenResponseMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=43, epoch=1, buf=null, msgType=TOKEN_RES), token=21, backpointerMap={6fb32f53-9543-3e71-ae69-513d904b2ad2=-1})
09:22:28.212 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to LogUnitServer: LogUnitWriteMsg(super=LogUnitPayloadMsg(payload=null, data=SlicedAbstractByteBuf(ridx: 0, widx: 63999990, cap: 63999990/63999990, unwrapped: PooledUnsafeDirectByteBuf(ridx: 64000098, widx: 64000098, cap: 67108864)), serializer=org.corfudb.util.serializer.CorfuSerializer@274c788a), address=21)
09:22:28.212 TRACE [event-0] o.c.i.LogUnitServer - Handling write request for address 21
09:22:28.212 TRACE [event-0] o.c.i.LogUnitServer - Write[21]
09:22:28.242 INFO [event-0] o.c.i.l.RollingLog - Disk_write[21]: Written to disk.
09:22:28.243 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=44, epoch=1, buf=null, msgType=ERROR_OK)
09:22:28.331 TRACE [ForkJoinPool.commonPool-worker-18] o.c.i.LogUnitServer - Eviction[6]: SIZE
09:22:28.334 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to SequencerServer: TokenRequestMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=45, epoch=1, buf=SlicedAbstractByteBuf(freed), msgType=TOKEN_REQ), streamIDs=[5b7a4ff1-505e-3173-884e-7976cff2a517], numTokens=1, tokenFlags=[])
09:22:28.335 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: TokenResponseMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=45, epoch=1, buf=null, msgType=TOKEN_RES), token=22, backpointerMap={5b7a4ff1-505e-3173-884e-7976cff2a517=-1})
09:22:28.769 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to LogUnitServer: LogUnitWriteMsg(super=LogUnitPayloadMsg(payload=null, data=SlicedAbstractByteBuf(ridx: 0, widx: 63999990, cap: 63999990/63999990, unwrapped: PooledUnsafeDirectByteBuf(ridx: 64000098, widx: 64000098, cap: 67108864)), serializer=org.corfudb.util.serializer.CorfuSerializer@274c788a), address=22)
09:22:28.769 TRACE [event-0] o.c.i.LogUnitServer - Handling write request for address 22
09:22:28.769 TRACE [event-0] o.c.i.LogUnitServer - Write[22]
09:22:28.800 INFO [event-0] o.c.i.l.RollingLog - Disk_write[22]: Written to disk.
09:22:28.864 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=46, epoch=1, buf=null, msgType=ERROR_OK)
09:22:28.865 TRACE [ForkJoinPool.commonPool-worker-18] o.c.i.LogUnitServer - Eviction[7]: SIZE
09:22:28.868 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to SequencerServer: TokenRequestMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=47, epoch=1, buf=SlicedAbstractByteBuf(freed), msgType=TOKEN_REQ), streamIDs=[c0f0b1a3-481e-35f2-bf92-5a8a66e89522], numTokens=1, tokenFlags=[])
09:22:28.869 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: TokenResponseMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=47, epoch=1, buf=null, msgType=TOKEN_RES), token=23, backpointerMap={c0f0b1a3-481e-35f2-bf92-5a8a66e89522=-1})
09:22:29.307 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to LogUnitServer: LogUnitWriteMsg(super=LogUnitPayloadMsg(payload=null, data=SlicedAbstractByteBuf(ridx: 0, widx: 63999990, cap: 63999990/63999990, unwrapped: PooledUnsafeDirectByteBuf(ridx: 64000098, widx: 64000098, cap: 67108864)), serializer=org.corfudb.util.serializer.CorfuSerializer@274c788a), address=23)
09:22:29.307 TRACE [event-0] o.c.i.LogUnitServer - Handling write request for address 23
09:22:29.307 TRACE [event-0] o.c.i.LogUnitServer - Write[23]
09:22:29.348 INFO [event-0] o.c.i.l.RollingLog - Disk_write[23]: Written to disk.
09:22:29.348 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=48, epoch=1, buf=null, msgType=ERROR_OK)
09:22:29.349 TRACE [ForkJoinPool.commonPool-worker-25] o.c.i.LogUnitServer - Eviction[23]: SIZE
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f4f33ef6f9a, pid=29215, tid=139977161094912
#
# JRE version: Java(TM) SE Runtime Environment (8.0_91-b14) (build 1.8.0_91-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.91-b14 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V [libjvm.so+0x7f8f9a]09:22:29.352 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to SequencerServer: TokenRequestMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=49, epoch=1, buf=SlicedAbstractByteBuf(freed), msgType=TOKEN_REQ), streamIDs=[5e3bc63e-54f2-3ab3-ae1e-3d399bf2014d], numTokens=1, tokenFlags=[])
09:22:29.352 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: TokenResponseMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=49, epoch=1, buf=null, msgType=TOKEN_RES), token=24, backpointerMap={5e3bc63e-54f2-3ab3-ae1e-3d399bf2014d=-1})
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/slf-tmp-delme/c/hs_err_pid29215.log
Compiled method (nm) 19335 1110 n 0 sun.misc.Unsafe::copyMemory (native)
total in heap [0x00007f4f1d55c690,0x00007f4f1d55ca00] = 880
relocation [0x00007f4f1d55c7b8,0x00007f4f1d55c800] = 72
main code [0x00007f4f1d55c800,0x00007f4f1d55ca00] = 512
Compiled method (c1) 19335 1122 3 sun.misc.Unsafe::copyMemory (11 bytes)
total in heap [0x00007f4f1d560990,0x00007f4f1d560da8] = 1048
relocation [0x00007f4f1d560ab8,0x00007f4f1d560af8] = 64
main code [0x00007f4f1d560b00,0x00007f4f1d560c80] = 384
stub code [0x00007f4f1d560c80,0x00007f4f1d560d28] = 168
metadata [0x00007f4f1d560d28,0x00007f4f1d560d30] = 8
scopes data [0x00007f4f1d560d30,0x00007f4f1d560d60] = 48
scopes pcs [0x00007f4f1d560d60,0x00007f4f1d560da0] = 64
dependencies [0x00007f4f1d560da0,0x00007f4f1d560da8] = 8
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
```
| 1.0 | SIGSEGV while running large write simulation workload - Initially reported by Johnny Z, using `master` branch as of commit b314266037be66af3cf1c85c2911f8c99a4664f6 (Tue Oct 4 15:00:47 2016 -0700). Workload is a single client writing 64000000 byte blobs sequentially; contact me or Johnny for details. The JVM crashes consistently & reliably with SIGSEGV after writing the 23rd or 24th chunk.
The server is run using a single node layout using:
```
env CORFUDB_HEAP=16364 SERVER_JVMFLAGS="-XX:MaxDirectMemorySize=8g" ./bin/corfu_server -l `pwd`/tmp-data-dir -d TRACE 8000 2>&1 | tee RUNLOG
```
Raising or lowering the values of `CORFUDB_HEAP` or the `MaxDirectMemorySize` don't make an obvious change in behavior: still the 23rd or 24th chunk.
Tail of console output when run with `-d TRACE`. I don't know offhand what the `o.c.i.LogUnitServer - Eviction[0]: SIZE` messages mean, but they start with index numbers at zero and count up incrementally for the first 8 writes ... and then jump up to (coincidentally?) the write number just processed by the server, e.g. jump from 7 to 23, and the segfault happens immediately afterward.
```
09:22:27.228 TRACE [ForkJoinPool.commonPool-worker-25] o.c.i.LogUnitServer - Eviction[4]: SIZE
09:22:27.228 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to SequencerServer: TokenRequestMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=41, epoch=1, buf=SlicedAbstractByteBuf(freed), msgType=TOKEN_REQ), streamIDs=[845d1324-f453-3775-9413-85e626ee1c35], numTokens=1, tokenFlags=[])
09:22:27.229 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: TokenResponseMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=41, epoch=1, buf=null, msgType=TOKEN_RES), token=20, backpointerMap={845d1324-f453-3775-9413-85e626ee1c35=-1})
09:22:27.638 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to LogUnitServer: LogUnitWriteMsg(super=LogUnitPayloadMsg(payload=null, data=SlicedAbstractByteBuf(ridx: 0, widx: 63999990, cap: 63999990/63999990, unwrapped: PooledUnsafeDirectByteBuf(ridx: 64000098, widx: 64000098, cap: 67108864)), serializer=org.corfudb.util.serializer.CorfuSerializer@274c788a), address=20)
09:22:27.639 TRACE [event-0] o.c.i.LogUnitServer - Handling write request for address 20
09:22:27.639 TRACE [event-0] o.c.i.LogUnitServer - Write[20]
09:22:27.677 INFO [event-0] o.c.i.l.RollingLog - Disk_write[20]: Written to disk.
09:22:27.773 TRACE [ForkJoinPool.commonPool-worker-18] o.c.i.LogUnitServer - Eviction[5]: SIZE
09:22:27.773 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=42, epoch=1, buf=null, msgType=ERROR_OK)
09:22:27.778 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to SequencerServer: TokenRequestMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=43, epoch=1, buf=SlicedAbstractByteBuf(freed), msgType=TOKEN_REQ), streamIDs=[6fb32f53-9543-3e71-ae69-513d904b2ad2], numTokens=1, tokenFlags=[])
09:22:27.778 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: TokenResponseMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=43, epoch=1, buf=null, msgType=TOKEN_RES), token=21, backpointerMap={6fb32f53-9543-3e71-ae69-513d904b2ad2=-1})
09:22:28.212 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to LogUnitServer: LogUnitWriteMsg(super=LogUnitPayloadMsg(payload=null, data=SlicedAbstractByteBuf(ridx: 0, widx: 63999990, cap: 63999990/63999990, unwrapped: PooledUnsafeDirectByteBuf(ridx: 64000098, widx: 64000098, cap: 67108864)), serializer=org.corfudb.util.serializer.CorfuSerializer@274c788a), address=21)
09:22:28.212 TRACE [event-0] o.c.i.LogUnitServer - Handling write request for address 21
09:22:28.212 TRACE [event-0] o.c.i.LogUnitServer - Write[21]
09:22:28.242 INFO [event-0] o.c.i.l.RollingLog - Disk_write[21]: Written to disk.
09:22:28.243 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=44, epoch=1, buf=null, msgType=ERROR_OK)
09:22:28.331 TRACE [ForkJoinPool.commonPool-worker-18] o.c.i.LogUnitServer - Eviction[6]: SIZE
09:22:28.334 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to SequencerServer: TokenRequestMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=45, epoch=1, buf=SlicedAbstractByteBuf(freed), msgType=TOKEN_REQ), streamIDs=[5b7a4ff1-505e-3173-884e-7976cff2a517], numTokens=1, tokenFlags=[])
09:22:28.335 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: TokenResponseMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=45, epoch=1, buf=null, msgType=TOKEN_RES), token=22, backpointerMap={5b7a4ff1-505e-3173-884e-7976cff2a517=-1})
09:22:28.769 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to LogUnitServer: LogUnitWriteMsg(super=LogUnitPayloadMsg(payload=null, data=SlicedAbstractByteBuf(ridx: 0, widx: 63999990, cap: 63999990/63999990, unwrapped: PooledUnsafeDirectByteBuf(ridx: 64000098, widx: 64000098, cap: 67108864)), serializer=org.corfudb.util.serializer.CorfuSerializer@274c788a), address=22)
09:22:28.769 TRACE [event-0] o.c.i.LogUnitServer - Handling write request for address 22
09:22:28.769 TRACE [event-0] o.c.i.LogUnitServer - Write[22]
09:22:28.800 INFO [event-0] o.c.i.l.RollingLog - Disk_write[22]: Written to disk.
09:22:28.864 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=46, epoch=1, buf=null, msgType=ERROR_OK)
09:22:28.865 TRACE [ForkJoinPool.commonPool-worker-18] o.c.i.LogUnitServer - Eviction[7]: SIZE
09:22:28.868 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to SequencerServer: TokenRequestMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=47, epoch=1, buf=SlicedAbstractByteBuf(freed), msgType=TOKEN_REQ), streamIDs=[c0f0b1a3-481e-35f2-bf92-5a8a66e89522], numTokens=1, tokenFlags=[])
09:22:28.869 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: TokenResponseMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=47, epoch=1, buf=null, msgType=TOKEN_RES), token=23, backpointerMap={c0f0b1a3-481e-35f2-bf92-5a8a66e89522=-1})
09:22:29.307 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to LogUnitServer: LogUnitWriteMsg(super=LogUnitPayloadMsg(payload=null, data=SlicedAbstractByteBuf(ridx: 0, widx: 63999990, cap: 63999990/63999990, unwrapped: PooledUnsafeDirectByteBuf(ridx: 64000098, widx: 64000098, cap: 67108864)), serializer=org.corfudb.util.serializer.CorfuSerializer@274c788a), address=23)
09:22:29.307 TRACE [event-0] o.c.i.LogUnitServer - Handling write request for address 23
09:22:29.307 TRACE [event-0] o.c.i.LogUnitServer - Write[23]
09:22:29.348 INFO [event-0] o.c.i.l.RollingLog - Disk_write[23]: Written to disk.
09:22:29.348 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=48, epoch=1, buf=null, msgType=ERROR_OK)
09:22:29.349 TRACE [ForkJoinPool.commonPool-worker-25] o.c.i.LogUnitServer - Eviction[23]: SIZE
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f4f33ef6f9a, pid=29215, tid=139977161094912
#
# JRE version: Java(TM) SE Runtime Environment (8.0_91-b14) (build 1.8.0_91-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.91-b14 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V [libjvm.so+0x7f8f9a]09:22:29.352 TRACE [event-0] o.c.i.NettyServerRouter - Message routed to SequencerServer: TokenRequestMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=49, epoch=1, buf=SlicedAbstractByteBuf(freed), msgType=TOKEN_REQ), streamIDs=[5e3bc63e-54f2-3ab3-ae1e-3d399bf2014d], numTokens=1, tokenFlags=[])
09:22:29.352 TRACE [event-0] o.c.i.NettyServerRouter - Sent response: TokenResponseMsg(super=CorfuMsg(clientID=21ec0df5-7554-4ad9-8600-ad4a16a43d83, requestID=49, epoch=1, buf=null, msgType=TOKEN_RES), token=24, backpointerMap={5e3bc63e-54f2-3ab3-ae1e-3d399bf2014d=-1})
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/slf-tmp-delme/c/hs_err_pid29215.log
Compiled method (nm) 19335 1110 n 0 sun.misc.Unsafe::copyMemory (native)
total in heap [0x00007f4f1d55c690,0x00007f4f1d55ca00] = 880
relocation [0x00007f4f1d55c7b8,0x00007f4f1d55c800] = 72
main code [0x00007f4f1d55c800,0x00007f4f1d55ca00] = 512
Compiled method (c1) 19335 1122 3 sun.misc.Unsafe::copyMemory (11 bytes)
total in heap [0x00007f4f1d560990,0x00007f4f1d560da8] = 1048
relocation [0x00007f4f1d560ab8,0x00007f4f1d560af8] = 64
main code [0x00007f4f1d560b00,0x00007f4f1d560c80] = 384
stub code [0x00007f4f1d560c80,0x00007f4f1d560d28] = 168
metadata [0x00007f4f1d560d28,0x00007f4f1d560d30] = 8
scopes data [0x00007f4f1d560d30,0x00007f4f1d560d60] = 48
scopes pcs [0x00007f4f1d560d60,0x00007f4f1d560da0] = 64
dependencies [0x00007f4f1d560da0,0x00007f4f1d560da8] = 8
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
```
| test | sigsegv while running large write simulation workload initially reported by johnny z using master branch as of commit tue oct workload is a single client writing byte blobs sequentially contact me or johnny for details the jvm crashes consistently reliably with sigsegv after writing the or chunk the server is run using a single node layout using env corfudb heap server jvmflags xx maxdirectmemorysize bin corfu server l pwd tmp data dir d trace tee runlog raising or lowering the values of corfudb heap or the maxdirectmemorysize don t make an obvious change in behavior still the or chunk tail of console output when run with d trace i don t know offhand what the o c i logunitserver eviction size messages mean but they start with index numbers at zero and count up incrementally for the first writes and then jump up to concidentally the write number just processed by the server e g jump from to and the segfault happens immediately afterward trace o c i logunitserver eviction size trace o c i nettyserverrouter message routed to sequencerserver tokenrequestmsg super corfumsg clientid requestid epoch buf slicedabstractbytebuf freed msgtype token req streamids numtokens tokenflags trace o c i nettyserverrouter sent response tokenresponsemsg super corfumsg clientid requestid epoch buf null msgtype token res token backpointermap trace o c i nettyserverrouter message routed to logunitserver logunitwritemsg super logunitpayloadmsg payload null data slicedabstractbytebuf ridx widx cap unwrapped pooledunsafedirectbytebuf ridx widx cap serializer org corfudb util serializer corfuserializer address trace o c i logunitserver handling write request for address trace o c i logunitserver write info o c i l rollinglog disk write written to disk trace o c i logunitserver eviction size trace o c i nettyserverrouter sent response corfumsg clientid requestid epoch buf null msgtype error ok trace o c i nettyserverrouter message routed to sequencerserver tokenrequestmsg super 
corfumsg clientid requestid epoch buf slicedabstractbytebuf freed msgtype token req streamids numtokens tokenflags trace o c i nettyserverrouter sent response tokenresponsemsg super corfumsg clientid requestid epoch buf null msgtype token res token backpointermap trace o c i nettyserverrouter message routed to logunitserver logunitwritemsg super logunitpayloadmsg payload null data slicedabstractbytebuf ridx widx cap unwrapped pooledunsafedirectbytebuf ridx widx cap serializer org corfudb util serializer corfuserializer address trace o c i logunitserver handling write request for address trace o c i logunitserver write info o c i l rollinglog disk write written to disk trace o c i nettyserverrouter sent response corfumsg clientid requestid epoch buf null msgtype error ok trace o c i logunitserver eviction size trace o c i nettyserverrouter message routed to sequencerserver tokenrequestmsg super corfumsg clientid requestid epoch buf slicedabstractbytebuf freed msgtype token req streamids numtokens tokenflags trace o c i nettyserverrouter sent response tokenresponsemsg super corfumsg clientid requestid epoch buf null msgtype token res token backpointermap trace o c i nettyserverrouter message routed to logunitserver logunitwritemsg super logunitpayloadmsg payload null data slicedabstractbytebuf ridx widx cap unwrapped pooledunsafedirectbytebuf ridx widx cap serializer org corfudb util serializer corfuserializer address trace o c i logunitserver handling write request for address trace o c i logunitserver write info o c i l rollinglog disk write written to disk trace o c i nettyserverrouter sent response corfumsg clientid requestid epoch buf null msgtype error ok trace o c i logunitserver eviction size trace o c i nettyserverrouter message routed to sequencerserver tokenrequestmsg super corfumsg clientid requestid epoch buf slicedabstractbytebuf freed msgtype token req streamids numtokens tokenflags trace o c i nettyserverrouter sent response tokenresponsemsg super 
corfumsg clientid requestid epoch buf null msgtype token res token backpointermap trace o c i nettyserverrouter message routed to logunitserver logunitwritemsg super logunitpayloadmsg payload null data slicedabstractbytebuf ridx widx cap unwrapped pooledunsafedirectbytebuf ridx widx cap serializer org corfudb util serializer corfuserializer address trace o c i logunitserver handling write request for address trace o c i logunitserver write info o c i l rollinglog disk write written to disk trace o c i nettyserverrouter sent response corfumsg clientid requestid epoch buf null msgtype error ok trace o c i logunitserver eviction size a fatal error has been detected by the java runtime environment sigsegv at pc pid tid jre version java tm se runtime environment build java vm java hotspot tm bit server vm mixed mode linux compressed oops problematic frame v trace o c i nettyserverrouter message routed to sequencerserver tokenrequestmsg super corfumsg clientid requestid epoch buf slicedabstractbytebuf freed msgtype token req streamids numtokens tokenflags trace o c i nettyserverrouter sent response tokenresponsemsg super corfumsg clientid requestid epoch buf null msgtype token res token backpointermap failed to write core dump core dumps have been disabled to enable core dumping try ulimit c unlimited before starting java again an error report file with more information is saved as home slf tmp delme c hs err log compiled method nm n sun misc unsafe copymemory native total in heap relocation main code compiled method sun misc unsafe copymemory bytes total in heap relocation main code stub code metadata scopes data scopes pcs dependencies if you would like to submit a bug report please visit | 1 |
65,982 | 16,516,932,158 | IssuesEvent | 2021-05-26 10:39:10 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | TypeError: i.map is not a function | Bug Critical Production Tab Widget UI Building Pod Widgets | Sentry Issue: [APPSMITH-Q7](https://sentry.io/organizations/appsmith/issues/2362960909/?referrer=github_integration)
```
TypeError: i.map is not a function
at generateTabContainers (widgets/TabsWidget.tsx:293:40)
at componentDidMount (widgets/TabsWidget.tsx:359:10)
at apply (utils/WorkerUtil.ts:176:10)
at Worker.r (../../src/helpers.ts:87:17)
...
(45 additional frame(s) were not displayed)
``` | 1.0 | TypeError: i.map is not a function - Sentry Issue: [APPSMITH-Q7](https://sentry.io/organizations/appsmith/issues/2362960909/?referrer=github_integration)
```
TypeError: i.map is not a function
at generateTabContainers (widgets/TabsWidget.tsx:293:40)
at componentDidMount (widgets/TabsWidget.tsx:359:10)
at apply (utils/WorkerUtil.ts:176:10)
at Worker.r (../../src/helpers.ts:87:17)
...
(45 additional frame(s) were not displayed)
``` | non_test | typeerror i map is not a function sentry issue typeerror i map is not a function at generatetabcontainers widgets tabswidget tsx at componentdidmount widgets tabswidget tsx at apply utils workerutil ts at worker r src helpers ts additional frame s were not displayed | 0 |
741,970 | 25,830,274,558 | IssuesEvent | 2022-12-12 15:38:39 | aacitelli/wowcraftingorders.com | https://api.github.com/repos/aacitelli/wowcraftingorders.com | closed | API calls returning 500 due to an Axios 400 | bug priority-critical | Trace ID vwlu82qs71ev
Log #1
```
{
"insertId": "6397471e00013355f1a8b396",
"jsonPayload": {
"req": {
"params": {},
"remoteAddress": "::ffff:169.254.8.129",
"method": "GET",
"headers": {
"accept": "*/*",
"referer": "https://www.wowcraftingorders.com/",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
"sec-ch-ua": "\"Not?A_Brand\";v=\"8\", \"Chromium\";v=\"108\", \"Google Chrome\";v=\"108\"",
"accept-encoding": "gzip, deflate, br",
"x-appengine-timeout-ms": "599999",
"authorization": "Bearer USB1HffnhgFW4L9BQFq11FVIcOUYg1Ppcg",
"origin": "https://www.wowcraftingorders.com",
"x-cloud-trace-context": "403ef86bd481062c27210d1db9c22ff3/15826907538737150272",
"x-forwarded-for": "63.140.73.25",
"x-appengine-https": "on",
"sec-ch-ua-mobile": "?0",
"x-appengine-country": "US",
"sec-fetch-mode": "cors",
"traceparent": "00-403ef86bd481062c27210d1db9c22ff3-dba4788d4b613940-00",
"host": "us-central1-wowtrade.cloudfunctions.net",
"x-appengine-region": "ak",
"x-appengine-citylatlong": "60.487714,-151.155188",
"x-appengine-request-log-id": "6397471d00ff0869ea6cf43f7d0001737e6366663939633465626630313462626336702d7470000134633839376563356463636363343331353333326437346566333331666561633a353600010121",
"x-appengine-appversionid": "s~cff99c4ebf014bbc6p-tp/4c897ec5dcccc4315332d74ef331feac:56.448517545970860976",
"x-appengine-city": "kalifornsky",
"sec-ch-ua-platform": "\"Windows\"",
"sec-fetch-site": "cross-site",
"sec-fetch-dest": "empty",
"x-appengine-user-ip": "63.140.73.25",
"function-execution-id": "vwlu82qs71ev",
"x-forwarded-proto": "https",
"x-appengine-default-version-hostname": "cff99c4ebf014bbc6p-tp.appspot.com",
"connection": "close",
"accept-language": "en-US,en;q=0.9",
"forwarded": "for=\"63.140.73.25\";proto=https"
},
"url": "/us/seller_listings/ping",
"remotePort": 45759,
"id": 1,
"query": {}
},
"err": {
"type": "Error",
"message": "failed with status code 500",
"stack": "Error: failed with status code 500\n at ServerResponse.onResFinished (/workspace/node_modules/pino-http/logger.js:107:40)\n at ServerResponse.emit (node:events:525:35)\n at ServerResponse.emit (node:domain:489:12)\n at onFinish (node:_http_outgoing:950:10)\n at callback (node:internal/streams/writable:554:21)\n at afterWrite (node:internal/streams/writable:499:5)\n at afterWriteTick (node:internal/streams/writable:486:10)\n at processTicksAndRejections (node:internal/process/task_queues:82:21)"
},
"res": {
"statusCode": 500,
"headers": {
"x-powered-by": "Express",
"content-length": "21",
"etag": "W/\"15-/6VXivhc2MKdLfIkLcUE47K6aH0\"",
"vary": "Origin",
"access-control-allow-origin": "https://www.wowcraftingorders.com",
"x-google-status": "crash",
"content-type": "text/plain; charset=utf-8"
}
},
"time": 1670858526077,
"hostname": "localhost",
"pid": 1,
"level": 30,
"responseTime": 480,
"msg": "request errored"
},
"resource": {
"type": "cloud_function",
"labels": {
"region": "us-central1",
"function_name": "app-prod",
"project_id": "wowtrade"
}
},
"timestamp": "2022-12-12T15:22:06.078677Z",
"labels": {
"execution_id": "vwlu82qs71ev",
"instance_id": "00c61b117ccd1d55b31dce4e16326d01c3458f1e92ee680b58c698b2835be225d99ee6708a633e61eb077a5143adf13e0d2ee03581049ad0dd46"
},
"logName": "projects/wowtrade/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
"trace": "projects/wowtrade/traces/403ef86bd481062c27210d1db9c22ff3",
"receiveTimestamp": "2022-12-12T15:22:06.335943586Z"
}
```
Log #2
```
{
"textPayload": "AxiosError: Request failed with status code 400\n at settle (/workspace/node_modules/axios/dist/node/axios.cjs:1855:12)\n at IncomingMessage.handleStreamEnd (/workspace/node_modules/axios/dist/node/axios.cjs:2704:11)\n at IncomingMessage.emit (node:events:525:35)\n at IncomingMessage.emit (node:domain:552:15)\n at endReadableNT (node:internal/streams/readable:1358:12)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)",
"insertId": "6397471e00057ab9f986386e",
"resource": {
"type": "cloud_function",
"labels": {
"function_name": "app-prod",
"project_id": "wowtrade",
"region": "us-central1"
}
},
"timestamp": "2022-12-12T15:22:06.359097Z",
"severity": "ERROR",
"labels": {
"instance_id": "00c61b117ccd1d55b31dce4e16326d01c3458f1e92ee680b58c698b2835be225d99ee6708a633e61eb077a5143adf13e0d2ee03581049ad0dd46",
"execution_id": "vwlu82qs71ev"
},
"logName": "projects/wowtrade/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
"trace": "projects/wowtrade/traces/403ef86bd481062c27210d1db9c22ff3",
"receiveTimestamp": "2022-12-12T15:22:06.668450249Z"
}
```
| 1.0 | API calls returning 500 due to an Axios 400 - Trace ID vwlu82qs71ev
Log #1
```
{
"insertId": "6397471e00013355f1a8b396",
"jsonPayload": {
"req": {
"params": {},
"remoteAddress": "::ffff:169.254.8.129",
"method": "GET",
"headers": {
"accept": "*/*",
"referer": "https://www.wowcraftingorders.com/",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
"sec-ch-ua": "\"Not?A_Brand\";v=\"8\", \"Chromium\";v=\"108\", \"Google Chrome\";v=\"108\"",
"accept-encoding": "gzip, deflate, br",
"x-appengine-timeout-ms": "599999",
"authorization": "Bearer USB1HffnhgFW4L9BQFq11FVIcOUYg1Ppcg",
"origin": "https://www.wowcraftingorders.com",
"x-cloud-trace-context": "403ef86bd481062c27210d1db9c22ff3/15826907538737150272",
"x-forwarded-for": "63.140.73.25",
"x-appengine-https": "on",
"sec-ch-ua-mobile": "?0",
"x-appengine-country": "US",
"sec-fetch-mode": "cors",
"traceparent": "00-403ef86bd481062c27210d1db9c22ff3-dba4788d4b613940-00",
"host": "us-central1-wowtrade.cloudfunctions.net",
"x-appengine-region": "ak",
"x-appengine-citylatlong": "60.487714,-151.155188",
"x-appengine-request-log-id": "6397471d00ff0869ea6cf43f7d0001737e6366663939633465626630313462626336702d7470000134633839376563356463636363343331353333326437346566333331666561633a353600010121",
"x-appengine-appversionid": "s~cff99c4ebf014bbc6p-tp/4c897ec5dcccc4315332d74ef331feac:56.448517545970860976",
"x-appengine-city": "kalifornsky",
"sec-ch-ua-platform": "\"Windows\"",
"sec-fetch-site": "cross-site",
"sec-fetch-dest": "empty",
"x-appengine-user-ip": "63.140.73.25",
"function-execution-id": "vwlu82qs71ev",
"x-forwarded-proto": "https",
"x-appengine-default-version-hostname": "cff99c4ebf014bbc6p-tp.appspot.com",
"connection": "close",
"accept-language": "en-US,en;q=0.9",
"forwarded": "for=\"63.140.73.25\";proto=https"
},
"url": "/us/seller_listings/ping",
"remotePort": 45759,
"id": 1,
"query": {}
},
"err": {
"type": "Error",
"message": "failed with status code 500",
"stack": "Error: failed with status code 500\n at ServerResponse.onResFinished (/workspace/node_modules/pino-http/logger.js:107:40)\n at ServerResponse.emit (node:events:525:35)\n at ServerResponse.emit (node:domain:489:12)\n at onFinish (node:_http_outgoing:950:10)\n at callback (node:internal/streams/writable:554:21)\n at afterWrite (node:internal/streams/writable:499:5)\n at afterWriteTick (node:internal/streams/writable:486:10)\n at processTicksAndRejections (node:internal/process/task_queues:82:21)"
},
"res": {
"statusCode": 500,
"headers": {
"x-powered-by": "Express",
"content-length": "21",
"etag": "W/\"15-/6VXivhc2MKdLfIkLcUE47K6aH0\"",
"vary": "Origin",
"access-control-allow-origin": "https://www.wowcraftingorders.com",
"x-google-status": "crash",
"content-type": "text/plain; charset=utf-8"
}
},
"time": 1670858526077,
"hostname": "localhost",
"pid": 1,
"level": 30,
"responseTime": 480,
"msg": "request errored"
},
"resource": {
"type": "cloud_function",
"labels": {
"region": "us-central1",
"function_name": "app-prod",
"project_id": "wowtrade"
}
},
"timestamp": "2022-12-12T15:22:06.078677Z",
"labels": {
"execution_id": "vwlu82qs71ev",
"instance_id": "00c61b117ccd1d55b31dce4e16326d01c3458f1e92ee680b58c698b2835be225d99ee6708a633e61eb077a5143adf13e0d2ee03581049ad0dd46"
},
"logName": "projects/wowtrade/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
"trace": "projects/wowtrade/traces/403ef86bd481062c27210d1db9c22ff3",
"receiveTimestamp": "2022-12-12T15:22:06.335943586Z"
}
```
Log #2
```
{
"textPayload": "AxiosError: Request failed with status code 400\n at settle (/workspace/node_modules/axios/dist/node/axios.cjs:1855:12)\n at IncomingMessage.handleStreamEnd (/workspace/node_modules/axios/dist/node/axios.cjs:2704:11)\n at IncomingMessage.emit (node:events:525:35)\n at IncomingMessage.emit (node:domain:552:15)\n at endReadableNT (node:internal/streams/readable:1358:12)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)",
"insertId": "6397471e00057ab9f986386e",
"resource": {
"type": "cloud_function",
"labels": {
"function_name": "app-prod",
"project_id": "wowtrade",
"region": "us-central1"
}
},
"timestamp": "2022-12-12T15:22:06.359097Z",
"severity": "ERROR",
"labels": {
"instance_id": "00c61b117ccd1d55b31dce4e16326d01c3458f1e92ee680b58c698b2835be225d99ee6708a633e61eb077a5143adf13e0d2ee03581049ad0dd46",
"execution_id": "vwlu82qs71ev"
},
"logName": "projects/wowtrade/logs/cloudfunctions.googleapis.com%2Fcloud-functions",
"trace": "projects/wowtrade/traces/403ef86bd481062c27210d1db9c22ff3",
"receiveTimestamp": "2022-12-12T15:22:06.668450249Z"
}
```
| non_test | api calls returning due to an axios trace id log insertid jsonpayload req params remoteaddress ffff method get headers accept referer user agent mozilla windows nt applewebkit khtml like gecko chrome safari sec ch ua not a brand v chromium v google chrome v accept encoding gzip deflate br x appengine timeout ms authorization bearer origin x cloud trace context x forwarded for x appengine https on sec ch ua mobile x appengine country us sec fetch mode cors traceparent host us wowtrade cloudfunctions net x appengine region ak x appengine citylatlong x appengine request log id x appengine appversionid s tp x appengine city kalifornsky sec ch ua platform windows sec fetch site cross site sec fetch dest empty x appengine user ip function execution id x forwarded proto https x appengine default version hostname tp appspot com connection close accept language en us en q forwarded for proto https url us seller listings ping remoteport id query err type error message failed with status code stack error failed with status code n at serverresponse onresfinished workspace node modules pino http logger js n at serverresponse emit node events n at serverresponse emit node domain n at onfinish node http outgoing n at callback node internal streams writable n at afterwrite node internal streams writable n at afterwritetick node internal streams writable n at processticksandrejections node internal process task queues res statuscode headers x powered by express content length etag w vary origin access control allow origin x google status crash content type text plain charset utf time hostname localhost pid level responsetime msg request errored resource type cloud function labels region us function name app prod project id wowtrade timestamp labels execution id instance id logname projects wowtrade logs cloudfunctions googleapis com functions trace projects wowtrade traces receivetimestamp log textpayload axioserror request failed with status code n at settle workspace 
node modules axios dist node axios cjs n at incomingmessage handlestreamend workspace node modules axios dist node axios cjs n at incomingmessage emit node events n at incomingmessage emit node domain n at endreadablent node internal streams readable n at processticksandrejections node internal process task queues insertid resource type cloud function labels function name app prod project id wowtrade region us timestamp severity error labels instance id execution id logname projects wowtrade logs cloudfunctions googleapis com functions trace projects wowtrade traces receivetimestamp | 0 |
261,419 | 22,745,338,584 | IssuesEvent | 2022-07-07 08:41:16 | mercedes-benz/sechub | https://api.github.com/repos/mercedes-benz/sechub | closed | TestAPI.dumpPDSJobOutput can lead to stack overflow | bug testing | ## Situation
The test helper method `TestAPI.dumpPDSJobOutput` can lead to a stack overflow exception when a job report cannot be fetched - e.g. because of network problems . (see stacktrace dump for details)
## Wanted
When job report cannot be fetched, the auto dump mechanism should dump recursive...
## Solution
Change implementation
```
java.lang.StackOverflowError
at java.net.PlainSocketImpl.socketClose0(Native Method)
at java.net.AbstractPlainSocketImpl.socketClose(AbstractPlainSocketImpl.java:700)
at java.net.AbstractPlainSocketImpl.close(AbstractPlainSocketImpl.java:532)
at java.net.SocksSocketImpl.close(SocksSocketImpl.java:1079)
at java.net.Socket.close(Socket.java:1513)
at sun.security.ssl.BaseSSLSocketImpl.close(BaseSSLSocketImpl.java:630)
at sun.security.ssl.SSLSocketImpl.closeSocket(SSLSocketImpl.java:1646)
at sun.security.ssl.SSLSocketImpl.close(SSLSocketImpl.java:572)
at org.apache.http.impl.BHttpConnectionBase.shutdown(BHttpConnectionBase.java:307)
at org.apache.http.impl.conn.DefaultManagedHttpClientConnection.shutdown(DefaultManagedHttpClientConnection.java:95)
at org.apache.http.impl.conn.LoggingManagedHttpClientConnection.shutdown(LoggingManagedHttpClientConnection.java:98)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager$2.process(PoolingHttpClientConnectionManager.java:420)
at org.apache.http.pool.AbstractConnPool.enumLeased(AbstractConnPool.java:599)
at org.apache.http.impl.conn.CPool.enumLeased(CPool.java:81)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.shutdown(PoolingHttpClientConnectionManager.java:413)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:368)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.springframework.http.client.HttpComponentsClientHttpRequest.executeInternal(HttpComponentsClientHttpRequest.java:87)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:66)
at org.springframework.http.client.BufferingClientHttpRequestWrapper.executeInternal(BufferingClientHttpRequestWrapper.java:63)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:66)
at org.springframework.http.client.InterceptingClientHttpRequest$InterceptingRequestExecution.execute(InterceptingClientHttpRequest.java:109)
at com.mercedesbenz.sechub.integrationtest.internal.TestSecHubRestAPIClientHttpRequestInterceptor.intercept(TestSecHubRestAPIClientHttpRequestInterceptor.java:52)
at org.springframework.http.client.InterceptingClientHttpRequest$InterceptingRequestExecution.execute(InterceptingClientHttpRequest.java:93)
at org.springframework.http.client.InterceptingClientHttpRequest.executeInternal(InterceptingClientHttpRequest.java:77)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:66)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:776)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:711)
at org.springframework.web.client.RestTemplate.getForEntity(RestTemplate.java:361)
at com.mercedesbenz.sechub.integrationtest.internal.TestRestHelper.getStringFromURL(TestRestHelper.java:236)
at com.mercedesbenz.sechub.integrationtest.api.AsPDSUser.getJobOutputStreamText(AsPDSUser.java:180)
at com.mercedesbenz.sechub.integrationtest.api.TestAPI.dumpPDSJobOutput(TestAPI.java:1244)
at com.mercedesbenz.sechub.integrationtest.api.AsPDSUser.getJobReport(AsPDSUser.java:75)
at com.mercedesbenz.sechub.integrationtest.api.AsPDSUser.getJobReport(AsPDSUser.java:55)
at com.mercedesbenz.sechub.integrationtest.api.TestAPI.dumpPDSJobOutput(TestAPI.java:1258)
at com.mercedesbenz.sechub.integrationtest.api.AsPDSUser.getJobReport(AsPDSUser.java:75)
...
``` | 1.0 | TestAPI.dumpPDSJobOutput can lead to stack overflow - ## Situation
The test helper method `TestAPI.dumpPDSJobOutput` can lead to a stack overflow exception when a job report cannot be fetched - e.g. because of network problems . (see stacktrace dump for details)
## Wanted
When job report cannot be fetched, the auto dump mechanism should dump recursive...
## Solution
Change implementation
```
java.lang.StackOverflowError
at java.net.PlainSocketImpl.socketClose0(Native Method)
at java.net.AbstractPlainSocketImpl.socketClose(AbstractPlainSocketImpl.java:700)
at java.net.AbstractPlainSocketImpl.close(AbstractPlainSocketImpl.java:532)
at java.net.SocksSocketImpl.close(SocksSocketImpl.java:1079)
at java.net.Socket.close(Socket.java:1513)
at sun.security.ssl.BaseSSLSocketImpl.close(BaseSSLSocketImpl.java:630)
at sun.security.ssl.SSLSocketImpl.closeSocket(SSLSocketImpl.java:1646)
at sun.security.ssl.SSLSocketImpl.close(SSLSocketImpl.java:572)
at org.apache.http.impl.BHttpConnectionBase.shutdown(BHttpConnectionBase.java:307)
at org.apache.http.impl.conn.DefaultManagedHttpClientConnection.shutdown(DefaultManagedHttpClientConnection.java:95)
at org.apache.http.impl.conn.LoggingManagedHttpClientConnection.shutdown(LoggingManagedHttpClientConnection.java:98)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager$2.process(PoolingHttpClientConnectionManager.java:420)
at org.apache.http.pool.AbstractConnPool.enumLeased(AbstractConnPool.java:599)
at org.apache.http.impl.conn.CPool.enumLeased(CPool.java:81)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.shutdown(PoolingHttpClientConnectionManager.java:413)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:368)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.springframework.http.client.HttpComponentsClientHttpRequest.executeInternal(HttpComponentsClientHttpRequest.java:87)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:66)
at org.springframework.http.client.BufferingClientHttpRequestWrapper.executeInternal(BufferingClientHttpRequestWrapper.java:63)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:66)
at org.springframework.http.client.InterceptingClientHttpRequest$InterceptingRequestExecution.execute(InterceptingClientHttpRequest.java:109)
at com.mercedesbenz.sechub.integrationtest.internal.TestSecHubRestAPIClientHttpRequestInterceptor.intercept(TestSecHubRestAPIClientHttpRequestInterceptor.java:52)
at org.springframework.http.client.InterceptingClientHttpRequest$InterceptingRequestExecution.execute(InterceptingClientHttpRequest.java:93)
at org.springframework.http.client.InterceptingClientHttpRequest.executeInternal(InterceptingClientHttpRequest.java:77)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:66)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:776)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:711)
at org.springframework.web.client.RestTemplate.getForEntity(RestTemplate.java:361)
at com.mercedesbenz.sechub.integrationtest.internal.TestRestHelper.getStringFromURL(TestRestHelper.java:236)
at com.mercedesbenz.sechub.integrationtest.api.AsPDSUser.getJobOutputStreamText(AsPDSUser.java:180)
at com.mercedesbenz.sechub.integrationtest.api.TestAPI.dumpPDSJobOutput(TestAPI.java:1244)
at com.mercedesbenz.sechub.integrationtest.api.AsPDSUser.getJobReport(AsPDSUser.java:75)
at com.mercedesbenz.sechub.integrationtest.api.AsPDSUser.getJobReport(AsPDSUser.java:55)
at com.mercedesbenz.sechub.integrationtest.api.TestAPI.dumpPDSJobOutput(TestAPI.java:1258)
at com.mercedesbenz.sechub.integrationtest.api.AsPDSUser.getJobReport(AsPDSUser.java:75)
...
``` | test | testapi dumppdsjoboutput can lead to stack overflow situation the test helper method testapi dumppdsjoboutput can lead to a stack overflow exception when a job report cannot be fetched e g because of network problems see stacktrace dump for details wanted when job report cannot be fetched the auto dump mechanism should dump recursive solution change implementation java lang stackoverflowerror at java net plainsocketimpl native method at java net abstractplainsocketimpl socketclose abstractplainsocketimpl java at java net abstractplainsocketimpl close abstractplainsocketimpl java at java net sockssocketimpl close sockssocketimpl java at java net socket close socket java at sun security ssl basesslsocketimpl close basesslsocketimpl java at sun security ssl sslsocketimpl closesocket sslsocketimpl java at sun security ssl sslsocketimpl close sslsocketimpl java at org apache http impl bhttpconnectionbase shutdown bhttpconnectionbase java at org apache http impl conn defaultmanagedhttpclientconnection shutdown defaultmanagedhttpclientconnection java at org apache http impl conn loggingmanagedhttpclientconnection shutdown loggingmanagedhttpclientconnection java at org apache http impl conn poolinghttpclientconnectionmanager process poolinghttpclientconnectionmanager java at org apache http pool abstractconnpool enumleased abstractconnpool java at org apache http impl conn cpool enumleased cpool java at org apache http impl conn poolinghttpclientconnectionmanager shutdown poolinghttpclientconnectionmanager java at org apache http impl execchain mainclientexec execute mainclientexec java at org apache http impl execchain protocolexec execute protocolexec java at org apache http impl execchain retryexec execute retryexec java at org apache http impl execchain redirectexec execute redirectexec java at org apache http impl client internalhttpclient doexecute internalhttpclient java at org apache http impl client closeablehttpclient execute closeablehttpclient java 
at org apache http impl client closeablehttpclient execute closeablehttpclient java at org springframework http client httpcomponentsclienthttprequest executeinternal httpcomponentsclienthttprequest java at org springframework http client abstractbufferingclienthttprequest executeinternal abstractbufferingclienthttprequest java at org springframework http client abstractclienthttprequest execute abstractclienthttprequest java at org springframework http client bufferingclienthttprequestwrapper executeinternal bufferingclienthttprequestwrapper java at org springframework http client abstractbufferingclienthttprequest executeinternal abstractbufferingclienthttprequest java at org springframework http client abstractclienthttprequest execute abstractclienthttprequest java at org springframework http client interceptingclienthttprequest interceptingrequestexecution execute interceptingclienthttprequest java at com mercedesbenz sechub integrationtest internal testsechubrestapiclienthttprequestinterceptor intercept testsechubrestapiclienthttprequestinterceptor java at org springframework http client interceptingclienthttprequest interceptingrequestexecution execute interceptingclienthttprequest java at org springframework http client interceptingclienthttprequest executeinternal interceptingclienthttprequest java at org springframework http client abstractbufferingclienthttprequest executeinternal abstractbufferingclienthttprequest java at org springframework http client abstractclienthttprequest execute abstractclienthttprequest java at org springframework web client resttemplate doexecute resttemplate java at org springframework web client resttemplate execute resttemplate java at org springframework web client resttemplate getforentity resttemplate java at com mercedesbenz sechub integrationtest internal testresthelper getstringfromurl testresthelper java at com mercedesbenz sechub integrationtest api aspdsuser getjoboutputstreamtext aspdsuser java at com mercedesbenz 
sechub integrationtest api testapi dumppdsjoboutput testapi java at com mercedesbenz sechub integrationtest api aspdsuser getjobreport aspdsuser java at com mercedesbenz sechub integrationtest api aspdsuser getjobreport aspdsuser java at com mercedesbenz sechub integrationtest api testapi dumppdsjoboutput testapi java at com mercedesbenz sechub integrationtest api aspdsuser getjobreport aspdsuser java | 1 |
588,946 | 17,686,292,332 | IssuesEvent | 2021-08-24 02:21:39 | staynomad/Nomad-Front | https://api.github.com/repos/staynomad/Nomad-Front | closed | Fix Expired Token Error with Host Toggle | dev:bug difficulty:medium priority:high | # Background
<!--- Put any relevant background information here. --->
- The host toggle only works once before an expired token error pops up.
# Task
<!--- Put the task here (ideally bullet points). --->
- Find and fix the bug.
# Done When
<!--- Put the completion criteria for the issue here. --->
- Can switch the host toggle unlimited amount of times
| 1.0 | Fix Expired Token Error with Host Toggle - # Background
<!--- Put any relevant background information here. --->
- The host toggle only works once before an expired token error pops up.
# Task
<!--- Put the task here (ideally bullet points). --->
- Find and fix the bug.
# Done When
<!--- Put the completion criteria for the issue here. --->
- Can switch the host toggle unlimited amount of times
| non_test | fix expired token error with host toggle background the host toggle only works once before an expired token error pops up task find and fix the bug done when can switch the host toggle unlimited amount of times | 0 |
69,202 | 7,126,963,874 | IssuesEvent | 2018-01-20 16:29:05 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | ccl/sqlccl: TestPointInTimeRecovery/recovery=inc-backup failed under stress | Robot bulkio test-failure | SHA: https://github.com/cockroachdb/cockroach/commits/4d07d08423a725508abb50056117030b11fe8826
Parameters:
```
TAGS=deadlock
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=366079&tab=buildLog
```
sql_runner.go:43: ccl/sqlccl/backup_test.go:1998: error executing 'RESTORE data.* FROM $1, $2, $3 WITH into_db=incbackup': pq: query execution canceled: importing 10 ranges: split at key /Table/55/1/100 failed: context canceled
``` | 1.0 | ccl/sqlccl: TestPointInTimeRecovery/recovery=inc-backup failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/4d07d08423a725508abb50056117030b11fe8826
Parameters:
```
TAGS=deadlock
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=366079&tab=buildLog
```
sql_runner.go:43: ccl/sqlccl/backup_test.go:1998: error executing 'RESTORE data.* FROM $1, $2, $3 WITH into_db=incbackup': pq: query execution canceled: importing 10 ranges: split at key /Table/55/1/100 failed: context canceled
``` | test | ccl sqlccl testpointintimerecovery recovery inc backup failed under stress sha parameters tags deadlock goflags stress build found a failed test sql runner go ccl sqlccl backup test go error executing restore data from with into db incbackup pq query execution canceled importing ranges split at key table failed context canceled | 1 |
309,273 | 26,659,285,518 | IssuesEvent | 2023-01-25 19:33:22 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | opened | Teste de generalizacao para a tag Concursos Públicos - Cópia do edital do concurso - São Sebastião do Rio Verde | generalization test development | DoD: Realizar o teste de Generalização do validador da tag Concursos Públicos - Cópia do edital do concurso para o Município de São Sebastião do Rio Verde. | 1.0 | Teste de generalizacao para a tag Concursos Públicos - Cópia do edital do concurso - São Sebastião do Rio Verde - DoD: Realizar o teste de Generalização do validador da tag Concursos Públicos - Cópia do edital do concurso para o Município de São Sebastião do Rio Verde. | test | teste de generalizacao para a tag concursos públicos cópia do edital do concurso são sebastião do rio verde dod realizar o teste de generalização do validador da tag concursos públicos cópia do edital do concurso para o município de são sebastião do rio verde | 1 |
401,541 | 11,795,084,941 | IssuesEvent | 2020-03-18 08:13:21 | thaliawww/concrexit | https://api.github.com/repos/thaliawww/concrexit | closed | Cannot update pizza order status with event override permission | bug pizzas priority: medium | In GitLab by @se-bastiaan on Sep 4, 2019, 21:07
### One-sentence description
Cannot update pizza order status with event override permission
### Current behaviour / Reproducing the bug
1. Create a pizza event for a committee you're not a member of
2. Be no superuser but have override event permission
3. Try to change the payment status of an order of said pizza event
4. Unable to change, 404 from API
### Expected behaviour
1. Create a pizza event for a committee you're not a member of
2. Be no superuser but have override event permission
3. Try to change the payment status of an order of said pizza event
4. Working | 1.0 | Cannot update pizza order status with event override permission - In GitLab by @se-bastiaan on Sep 4, 2019, 21:07
### One-sentence description
Cannot update pizza order status with event override permission
### Current behaviour / Reproducing the bug
1. Create a pizza event for a committee you're not a member of
2. Be no superuser but have override event permission
3. Try to change the payment status of an order of said pizza event
4. Unable to change, 404 from API
### Expected behaviour
1. Create a pizza event for a committee you're not a member of
2. Be no superuser but have override event permission
3. Try to change the payment status of an order of said pizza event
4. Working | non_test | cannot update pizza order status with event override permission in gitlab by se bastiaan on sep one sentence description cannot update pizza order status with event override permission current behaviour reproducing the bug create a pizza event for a committee you re not a member of be no superuser but have override event permission try to change the payment status of an order of said pizza event unable to change from api expected behaviour create a pizza event for a committee you re not a member of be no superuser but have override event permission try to change the payment status of an order of said pizza event working | 0 |
351,982 | 32,040,332,181 | IssuesEvent | 2023-09-22 18:44:42 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | opened | Release 4.6.0 - Beta 1 - E2E UX tests - File Integrity Monitoring | type/test level/subtask |
## End-to-End (E2E) Testing Guideline
- **Documentation:** Always consult the development documentation for the current stage tag at [this link](https://documentation-dev.wazuh.com/v4.6.0-beta1/index.html). Be careful: some of the described steps might refer to the version currently in production, so always navigate using the development documentation for the stage under test.
- **Test Requirements:** Ensure your test comprehensively includes a full stack and agent(s) deployment as per the Deployment requirements, detailing the machine OS, installed version, and revision.
- **Deployment Options:** While deployments can be local (using VMs, Vagrant, etc) or on the aws-dev account, opt for local deployments when feasible. For AWS access, coordinate with the CICD team through [this link](https://github.com/wazuh/internal-devel-requests/issues/new?assignees=&labels=level%2Ftask%2C+request%2Foperational%2C+type%2Fchange&projects=&template=operational--request.md&title=%3CTitle%3E).
- **External Accounts:** If tests require third-party accounts (e.g., GitHub, Azure, AWS, GCP), request the necessary access through the CICD team [here](https://github.com/wazuh/internal-devel-requests/issues/new?assignees=&labels=level%2Ftask%2C+request%2Foperational%2C+type%2Fchange&projects=&template=operational--request.md&title=%3CTitle%3E).
- **Alerts:** Every test should generate a minimum of one end-to-end alert, from the agent to the dashboard, irrespective of test type.
- **Multi-node Testing:** For multi-node wazuh-manager tests, ensure agents are connected to both workers and the master node.
- **Package Verification:** Use the pre-release package that matches the current TAG you're testing. Confirm its version and revision.
- **Filebeat Errors:** If you encounter errors with Filebeat during testing, refer to [this Slack discussion](https://wazuh-team.slack.com/archives/C03BDG0K6JC/p1672168163537809) for insights and resolutions.
- **Known Issues:** Familiarize yourself with previously reported issues in the Known Issues section. This helps in identifying already recognized errors during testing.
- **Reporting New Issues:** Any new errors discovered during testing that aren't listed under Known Issues should be reported. Assign the issue to the corresponding team (QA if unsure), add the `Release testing/publication` objective and `Very high` priority. Communicate these to the team and QA via the c-release Slack channel.
- **Test Conduct:** It's imperative to be thorough in your testing, offering enough detail for reviewers. Incomplete tests might necessitate a redo.
- **Documentation Feedback:** Encountering documentation gaps, unclear guidelines, or anything that disrupts the testing or UX? Open an issue, especially if it's not listed under Known Issues.
- **Format:** If this is your first time doing this, refer to the format (but not necessarily the content, as it may vary) of previous E2E tests; here is an example: https://github.com/wazuh/wazuh/issues/13994.
- **Status and completion:** Change the issue status within your team project accordingly. Once you finish testing and write the conclusions, move it to Pending review and notify the @wazuh/watchdogs team via Slack using the [c-release channel](https://wazuh-team.slack.com/archives/C02A737S5MJ). Beware that the reviewers might request additional information or task repetitions.
- **For reviewers:** Please move the issue to Pending final review and notify via Slack using the same thread if everything is ok; otherwise, perform an issue update with the requested changes, move it to On hold, increase the review_cycles in the team project by one, and notify the issue assignee via Slack using the same thread.
For the conclusions and the issue testing and updates, use the following legend:
**Status legend**
- 🟢 All checks passed
- 🟡 Found a known issue
- 🔴 Found a new error
## Deployment requirements
| Component | Installation | Type | OS |
|----------|--------------|------|----|
| Indexer | [OVA](https://documentation-dev.wazuh.com/v4.6.0-beta1/deployment-options/virtual-machine/virtual-machine.html) | - | - |
| Server | [OVA](https://documentation-dev.wazuh.com/v4.6.0-beta1/deployment-options/virtual-machine/virtual-machine.html) | - | - |
| Dashboard | [OVA](https://documentation-dev.wazuh.com/v4.6.0-beta1/deployment-options/virtual-machine/virtual-machine.html) | - | - |
| Agent | [Wazuh WUI one-liner deploy using IP](https://documentation-dev.wazuh.com/v4.6.0-beta1/_images/deploy-new-agent-from-ui1.png) | - | Windows 10 x86_64, Debian 12 aarch64, RHEL 9 x86_64, macOS Ventura arm |
## Test description
Test different FIM use cases for Windows, Linux and macOS:
- whodata
- report_changes
- inventory of files
Check that these use cases still work for the current release under test:
https://documentation-dev.wazuh.com/v4.6.0-beta1/proof-of-concept-guide/poc-file-integrity-monitoring.html
Check that this blog post is still valid for the current release under test (suggest changes otherwise):
https://wazuh.com/blog/preventing-and-detecting-ransomware-with-wazuh/
Navigate through the WUI - FIM section to ensure that data is accurate and updated when needed (inventory/alerts/dashboards)
## Known issues
- https://github.com/wazuh/wazuh/issues/8602
- https://github.com/wazuh/wazuh-documentation/issues/5013
- https://github.com/wazuh/wazuh/issues/18429
## Conclusions
Summarize the errors detected (Known Issues included). Illustrate using the table below, removing current examples:
| **Status** | **Test** | **Failure type** | **Notes** |
|----------------|-------------|---------------------|----------------|
| 🟡 | Example Test: API Integration | Timeout issues on certain endpoints | Known issue: https://github.com/example/repo/issues/12345 |
| 🔴 | Example Test: Data Migration | Data inconsistency in the new version | New issue opened: https://github.com/example/repo/issues/67890 |
## Feedback
We value your feedback. Please provide insights on your testing experience.
- Was the testing guideline clear? Were there any ambiguities?
- Did you face any challenges not covered by the guideline?
- Suggestions for improvement:
## Reviewers validation
The criteria for completing this task are based on the validation of the conclusions and the test results by all reviewers.
All the checkboxes below must be marked in order to close this issue.
- [ ] @havidarou
- [ ] @wazuh/watchdogs | 1.0 | Release 4.6.0 - Beta 1 - E2E UX tests - File Integrity Monitoring -
## End-to-End (E2E) Testing Guideline
- **Documentation:** Always consult the development documentation for the current stage tag at [this link](https://documentation-dev.wazuh.com/v4.6.0-beta1/index.html). Be careful because some of the description steps might refer to a current version in production; always navigate using the current development documentation for the stage under test.
- **Test Requirements:** Ensure your test comprehensively includes a full stack and agent/s deployment as per the Deployment requirements, detailing the machine OS, installed version, and revision.
- **Deployment Options:** While deployments can be local (using VMs, Vagrant, etc) or on the aws-dev account, opt for local deployments when feasible. For AWS access, coordinate with the CICD team through [this link](https://github.com/wazuh/internal-devel-requests/issues/new?assignees=&labels=level%2Ftask%2C+request%2Foperational%2C+type%2Fchange&projects=&template=operational--request.md&title=%3CTitle%3E).
- **External Accounts:** If tests require third-party accounts (e.g., GitHub, Azure, AWS, GCP), request the necessary access through the CICD team [here](https://github.com/wazuh/internal-devel-requests/issues/new?assignees=&labels=level%2Ftask%2C+request%2Foperational%2C+type%2Fchange&projects=&template=operational--request.md&title=%3CTitle%3E).
- **Alerts:** Every test should generate a minimum of one end-to-end alert, from the agent to the dashboard, irrespective of test type.
- **Multi-node Testing:** For multi-node wazuh-manager tests, ensure agents are connected to both workers and the master node.
- **Package Verification:** Use the pre-release package that matches the current TAG you're testing. Confirm its version and revision.
- **Filebeat Errors:** If you encounter errors with Filebeat during testing, refer to [this Slack discussion](https://wazuh-team.slack.com/archives/C03BDG0K6JC/p1672168163537809) for insights and resolutions.
- **Known Issues:** Familiarize yourself with previously reported issues in the Known Issues section. This helps in identifying already recognized errors during testing.
- **Reporting New Issues:** Any new errors discovered during testing that aren't listed under Known Issues should be reported. Assign the issue to the corresponding team (QA if unsure), add the `Release testing/publication` objective and `Very high` priority. Communicate these to the team and QA via the c-release Slack channel.
- **Test Conduct:** It's imperative to be thorough in your testing, offering enough detail for reviewers. Incomplete tests might necessitate a redo.
- **Documentation Feedback:** Encountering documentation gaps, unclear guidelines, or anything that disrupts the testing or UX? Open an issue, especially if it's not listed under Known Issues.
- **Format:** If this is your first time doing this, refer to the format (but not necessarily the content, as it may vary) of previous E2E tests; here is an example: https://github.com/wazuh/wazuh/issues/13994.
- **Status and completion:** Change the issue status within your team project accordingly. Once you finish testing and write the conclusions, move it to Pending review and notify the @wazuh/watchdogs team via Slack using the [c-release channel](https://wazuh-team.slack.com/archives/C02A737S5MJ). Beware that the reviewers might request additional information or task repetitions.
- **For reviewers:** Please move the issue to Pending final review and notify via Slack using the same thread if everything is ok; otherwise, perform an issue update with the requested changes, move it to On hold, increase the review_cycles in the team project by one, and notify the issue assignee via Slack using the same thread.
For the conclusions and the issue testing and updates, use the following legend:
**Status legend**
- 🟢 All checks passed
- 🟡 Found a known issue
- 🔴 Found a new error
## Deployment requirements
| Component | Installation | Type | OS |
|----------|--------------|------|----|
| Indexer | [OVA](https://documentation-dev.wazuh.com/v4.6.0-beta1/deployment-options/virtual-machine/virtual-machine.html) | - | - |
| Server | [OVA](https://documentation-dev.wazuh.com/v4.6.0-beta1/deployment-options/virtual-machine/virtual-machine.html) | - | - |
| Dashboard | [OVA](https://documentation-dev.wazuh.com/v4.6.0-beta1/deployment-options/virtual-machine/virtual-machine.html) | - | - |
| Agent | [Wazuh WUI one-liner deploy using IP](https://documentation-dev.wazuh.com/v4.6.0-beta1/_images/deploy-new-agent-from-ui1.png) | - | Windows 10 x86_64, Debian 12 aarch64, RHEL 9 x86_64, macOS Ventura arm |
## Test description
Test different FIM use cases for Windows, Linux and macOS:
- whodata
- report_changes
- inventory of files
Check that these use cases still work for the current release under test:
https://documentation-dev.wazuh.com/v4.6.0-beta1/proof-of-concept-guide/poc-file-integrity-monitoring.html
Check that this blog post is still valid for the current release under test (suggest changes otherwise):
https://wazuh.com/blog/preventing-and-detecting-ransomware-with-wazuh/
Navigate through the WUI - FIM section to ensure that data is accurate and updated when needed (inventory/alerts/dashboards)
## Known issues
- https://github.com/wazuh/wazuh/issues/8602
- https://github.com/wazuh/wazuh-documentation/issues/5013
- https://github.com/wazuh/wazuh/issues/18429
## Conclusions
Summarize the errors detected (Known Issues included). Illustrate using the table below, removing current examples:
| **Status** | **Test** | **Failure type** | **Notes** |
|----------------|-------------|---------------------|----------------|
| 🟡 | Example Test: API Integration | Timeout issues on certain endpoints | Known issue: https://github.com/example/repo/issues/12345 |
| 🔴 | Example Test: Data Migration | Data inconsistency in the new version | New issue opened: https://github.com/example/repo/issues/67890 |
## Feedback
We value your feedback. Please provide insights on your testing experience.
- Was the testing guideline clear? Were there any ambiguities?
- Did you face any challenges not covered by the guideline?
- Suggestions for improvement:
## Reviewers validation
The criteria for completing this task are based on the validation of the conclusions and the test results by all reviewers.
All the checkboxes below must be marked in order to close this issue.
- [ ] @havidarou
- [ ] @wazuh/watchdogs | test | release beta ux tests file integrity monitoring end to end testing guideline documentation always consult the development documentation for the current stage tag at be careful because some of the description steps might refer to a current version in production always navigate using the current development documention for the stage under test test requirements ensure your test comprehensively includes a full stack and agent s deployment as per the deployment requirements detailing the machine os installed version and revision deployment options while deployments can be local using vms vagrant etc or on the aws dev account opt for local deployments when feasible for aws access coordinate with the cicd team through external accounts if tests require third party accounts e g github azure aws gcp request the necessary access through the cicd team alerts every test should generate a minimum of one end to end alert from the agent to the dashboard irrespective of test type multi node testing for multi node wazuh manager tests ensure agents are connected to both workers and the master node package verification use the pre release package that matches the current tag you re testing confirm its version and revision filebeat errors if you encounter errors with filebeat during testing refer to for insights and resolutions known issues familiarize yourself with previously reported issues in the known issues section this helps in identifying already recognized errors during testing reporting new issues any new errors discovered during testing that aren t listed under known issues should be reported assign the issue to the corresponding team qa if unsure add the release testing publication objective and very high priority communicate these to the team and qa via the c release slack channel test conduct it s imperative to be thorough in your testing offering enough detail for reviewers incomplete tests might necessitate a redo documentation feedback 
encountering documentation gaps unclear guidelines or anything that disrupts the testing or ux open an issue especially if it s not listed under known issues format if this is your first time doing this refer to the format but not necessarily the content as it may vary of previous tests here you have an example status and completion change the issue status within your team project accordingly once you finish testing and write the conclusions move it to pending review and notify the wazuh watchdogs team via slack using the beware that the reviewers might request additional information or task repetitions for reviewers please move the issue to pending final review and notify via slack using the same thread if everything is ok otherwise perform an issue update with the requested changes and move it to on hold increase the review cycles in the team project by one and notify the issue assignee via slack using the same thread for the conclusions and the issue testing and updates use the following legend status legend 🟢 all checks passed 🟡 found a known issue 🔴 found a new error deployment requirements component installation type os indexer server dashboard agent windows debian rhel macos ventura arm test description test different fim use cases for windows linux and macos whodata report changes inventory of files check that this uses cases still work for the current release under test check that this blog post is still valid for the current release under test suggest changes otherwise navigate trough wui fim section to ensure that data is accurate and updated when needed inventory alerts dashboards known issues conclusions summarize the errors detected known issues included illustrate using the table below removing current examples status test failure type notes 🟡 example test api integration timeout issues on certain endpoints known issue 🔴 example test data migration data inconsistency in the new version new issue opened feedback we value your feedback please provide 
insights on your testing experience was the testing guideline clear were there any ambiguities did you face any challenges not covered by the guideline suggestions for improvement reviewers validation the criteria for completing this task is based on the validation of the conclusions and the test results by all reviewers all the checkboxes below must be marked in order to close this issue havidarou wazuh watchdogs | 1 |
296,537 | 9,121,586,922 | IssuesEvent | 2019-02-23 00:09:34 | DrylandEcology/STEPWAT2 | https://api.github.com/repos/DrylandEcology/STEPWAT2 | closed | Add two output columns that indicate fire years - wildfire and prescribed fire | enhancement highpriority | We need to add two columns to the biomass output file that indicate which years had fires. One column for prescribed fire and one column for wildfire. For a single iteration, this should be a 0 (no fire) or 1 (fire occurred). For multiple iterations, this should be a count of how many fires occurred for each year across those iterations (i.e. simulation run for 50 iterations, year 10 = 5 (5 fires in year 10 among the 50 iterations).
| 1.0 | Add two output columns that indicate fire years - wildfire and prescribed fire - We need to add two columns to the biomass output file that indicate which years had fires. One column for prescribed fire and one column for wildfire. For a single iteration, this should be a 0 (no fire) or 1 (fire occurred). For multiple iterations, this should be a count of how many fires occurred for each year across those iterations (i.e. simulation run for 50 iterations, year 10 = 5 (5 fires in year 10 among the 50 iterations).
| non_test | add two output columns that indicate fire years wildfire and prescribed fire we need to add two columns to the biomass output file that indicate which years had fires one column for prescribed fire and one column for wildfire for a single iteration this should be a no fire or fire occurred for multiple iterations this should be a count of how many fires occurred for each year across those iterations i e simulation run for iterations year fires in year among the iterations | 0 |
175,612 | 13,572,426,551 | IssuesEvent | 2020-09-19 00:17:51 | timescale/timescaledb | https://api.github.com/repos/timescale/timescaledb | closed | Apache regression tests are failing, since tests don't respect different between OSS and TSL | testing | [Apache build](https://github.com/timescale/timescaledb/runs/1114025606) failed with the diff:
```diff
--- /home/runner/work/timescaledb/timescaledb/test/expected/plan_hypertable_cache-11.out 2020-09-14 19:21:53.120502879 +0000
+++ /home/runner/work/timescaledb/timescaledb/build/test/results/plan_hypertable_cache-11.out 2020-09-14 19:22:32.154864169 +0000
@@ -74,30 +74,4 @@
WHERE res = 0;
$sql$;
ALTER TABLE metrics SET (timescaledb.compress);
-SELECT compress_chunk(show_chunks('metrics'));
- compress_chunk
-----------------------------------------
- _timescaledb_internal._hyper_2_5_chunk
-(1 row)
-
--- should have decompresschunk node
-:PREFIX SELECT * FROM ht_func();
- QUERY PLAN
----------------------------------------------------------
- Append
- -> Seq Scan on metrics
- -> Custom Scan (DecompressChunk) on _hyper_2_5_chunk
- -> Seq Scan on compress_hyper_3_6_chunk
-(4 rows)
-
-\c
--- plan should be identical to previous plan in fresh session
-:PREFIX SELECT * FROM ht_func();
- QUERY PLAN
----------------------------------------------------------
- Append
- -> Seq Scan on metrics
- -> Custom Scan (DecompressChunk) on _hyper_2_5_chunk
- -> Seq Scan on compress_hyper_3_6_chunk
-(4 rows)
-
+ERROR: functionality not supported under the current license "ApacheOnly", license
======================================================================
``` | 1.0 | Apache regression tests are failing, since tests don't respect different between OSS and TSL - [Apache build](https://github.com/timescale/timescaledb/runs/1114025606) failed with the diff:
```diff
--- /home/runner/work/timescaledb/timescaledb/test/expected/plan_hypertable_cache-11.out 2020-09-14 19:21:53.120502879 +0000
+++ /home/runner/work/timescaledb/timescaledb/build/test/results/plan_hypertable_cache-11.out 2020-09-14 19:22:32.154864169 +0000
@@ -74,30 +74,4 @@
WHERE res = 0;
$sql$;
ALTER TABLE metrics SET (timescaledb.compress);
-SELECT compress_chunk(show_chunks('metrics'));
- compress_chunk
-----------------------------------------
- _timescaledb_internal._hyper_2_5_chunk
-(1 row)
-
--- should have decompresschunk node
-:PREFIX SELECT * FROM ht_func();
- QUERY PLAN
----------------------------------------------------------
- Append
- -> Seq Scan on metrics
- -> Custom Scan (DecompressChunk) on _hyper_2_5_chunk
- -> Seq Scan on compress_hyper_3_6_chunk
-(4 rows)
-
-\c
--- plan should be identical to previous plan in fresh session
-:PREFIX SELECT * FROM ht_func();
- QUERY PLAN
----------------------------------------------------------
- Append
- -> Seq Scan on metrics
- -> Custom Scan (DecompressChunk) on _hyper_2_5_chunk
- -> Seq Scan on compress_hyper_3_6_chunk
-(4 rows)
-
+ERROR: functionality not supported under the current license "ApacheOnly", license
======================================================================
``` | test | apache regression tests are failing since tests don t respect different between oss and tsl failed with the diff diff home runner work timescaledb timescaledb test expected plan hypertable cache out home runner work timescaledb timescaledb build test results plan hypertable cache out where res sql alter table metrics set timescaledb compress select compress chunk show chunks metrics compress chunk timescaledb internal hyper chunk row should have decompresschunk node prefix select from ht func query plan append seq scan on metrics custom scan decompresschunk on hyper chunk seq scan on compress hyper chunk rows c plan should be identical to previous plan in fresh session prefix select from ht func query plan append seq scan on metrics custom scan decompresschunk on hyper chunk seq scan on compress hyper chunk rows error functionality not supported under the current license apacheonly license | 1 |
217,818 | 16,889,821,570 | IssuesEvent | 2021-06-23 07:53:35 | spring-projects/spring-graphql | https://api.github.com/repos/spring-projects/spring-graphql | opened | GraphqlTester should not have web dependencies | in: test in: web type: enhancement | There is nothing in `GraphQlTester` that is web specific, apart from the factory methods accepting `WebTestClient` and `WebGraphqlHandler`, but that'll change as we add web specific inputs in #64.
We can create a `WebGraphQlTester` that extends `GraphQlTester`, allows additional, web-specific inputs, and exposes web-specific factory methods, while `GraphQlTester` remains neutral to the underlying transport it is used with. In the future we may have other such extensions, for example over RSocket. | 1.0 | GraphqlTester should not have web dependencies - There is nothing in `GraphQlTester` that is web specific, apart from the factory methods accepting `WebTestClient` and `WebGraphqlHandler`, but that'll change as we add web specific inputs in #64.
We can create a `WebGraphQlTester` that extends `GraphQlTester`, allows additional, web-specific inputs, and exposes web-specific factory methods, while `GraphQlTester` remains neutral to the underlying transport it is used with. In the future we may have other such extensions, for example over RSocket. | test | graphqltester should not have web dependencies there is nothing in graphqltester that is web specific apart from the factory methods accepting webtestclient and webgraphqlhandler but that ll change as we add web specific inputs in we can create a webgraphqltester that extends graphqltester allows additional web specific inputs and exposes web specific factory methods while graphqltester remains neutral to the underlying transport it is used with in the future we may have other such extensions for example over rsocket | 1 |
626 | 2,506,064,490 | IssuesEvent | 2015-01-12 05:07:18 | opencog/opencog | https://api.github.com/repos/opencog/opencog | reopened | Testing game character dialogue | embodiment nlp unit-test | Making a set of test cases for game character dialogue, doing testing, and fixing what’s broken. | 1.0 | Testing game character dialogue - Making a set of test cases for game character dialogue, doing testing, and fixing what’s broken. | test | testing game character dialogue making a set of test cases for game character dialogue doing testing and fixing what’s broken | 1 |
51,842 | 6,199,846,856 | IssuesEvent | 2017-07-05 22:45:01 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | Kubectl taint should update the taint on a node fails with "Unable to connect to the server: i/o timeout" | component/kubernetes kind/test-flake priority/P1 | <details>
<summary>Stacktrace</summary>
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1476
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl [kubectl --server=https://internal-api.prtest-5a37c28-2600.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig taint nodes ci-prtest-5a37c28-2600-ig-n-8d5b kubernetes.io/e2e-taint-key-001-2b3a434b-4524-11e7-8ce8-0ee4c522aca2=testing-taint-value:NoSchedule] [] <nil> Unable to connect to the server: dial tcp 35.188.192.152:8443: i/o timeout\n [] <nil> 0xc420771770 exit status 1 <nil> <nil> true [0xc420034600 0xc420034618 0xc420034630] [0xc420034600 0xc420034618 0xc420034630] [0xc420034610 0xc420034628] [0xdd8390 0xdd8390] 0xc421be8a20 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 35.188.192.152:8443: i/o timeout\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl [kubectl --server=https://internal-api.prtest-5a37c28-2600.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig taint nodes ci-prtest-5a37c28-2600-ig-n-8d5b kubernetes.io/e2e-taint-key-001-2b3a434b-4524-11e7-8ce8-0ee4c522aca2=testing-taint-value:NoSchedule] [] <nil> Unable to connect to the server: dial tcp 35.188.192.152:8443: i/o timeout
[] <nil> 0xc420771770 exit status 1 <nil> <nil> true [0xc420034600 0xc420034618 0xc420034630] [0xc420034600 0xc420034618 0xc420034630] [0xc420034610 0xc420034628] [0xdd8390 0xdd8390] 0xc421be8a20 <nil>}:
Command stdout:
stderr:
Unable to connect to the server: dial tcp 35.188.192.152:8443: i/o timeout
error:
exit status 1
not to have occurred
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:175
</details>
<details>
<summary>Standard Output</summary>
[BeforeEach] [Top Level]
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Kubectl client
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
May 30 06:38:49.939: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
May 30 06:38:50.030: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:289
[It] should update the taint on a node
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1476
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: adding the taint kubernetes.io/e2e-taint-key-001-2b3a434b-4524-11e7-8ce8-0ee4c522aca2=testing-taint-value:NoSchedule to a node
May 30 06:38:52.629: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://internal-api.prtest-5a37c28-2600.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig taint nodes ci-prtest-5a37c28-2600-ig-n-8d5b kubernetes.io/e2e-taint-key-001-2b3a434b-4524-11e7-8ce8-0ee4c522aca2=testing-taint-value:NoSchedule'
May 30 06:39:22.842: INFO: rc: 127
May 30 06:39:22.842: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
STEP: Collecting events from namespace "e2e-tests-kubectl-5qdr5".
STEP: Found 8 events.
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:50 -0400 EDT - event for without-label: {default-scheduler } Scheduled: Successfully assigned without-label to ci-prtest-5a37c28-2600-ig-n-8d5b
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:51 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:51 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} Created: Created container with id 8a89ca1e533de41a58a5762ea85861260ac34e406b741006cd758c278fe1650f
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:51 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} Started: Started container with id 8a89ca1e533de41a58a5762ea85861260ac34e406b741006cd758c278fe1650f
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:53 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} FailedMount: MountVolume.SetUp failed for volume "kubernetes.io/secret/2b413eca-4524-11e7-bb4c-42010a800005-default-token-gbhnf" (spec.Name: "default-token-gbhnf") pod "2b413eca-4524-11e7-bb4c-42010a800005" (UID: "2b413eca-4524-11e7-bb4c-42010a800005") with: secret "e2e-tests-kubectl-5qdr5"/"default-token-gbhnf" not registered
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:53 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} Killing: Killing container with id docker://8a89ca1e533de41a58a5762ea85861260ac34e406b741006cd758c278fe1650f:Need to kill Pod
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:53 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:53 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} FailedSync: Error syncing pod, skipping: failed to "CreatePodSandbox" for "without-label_e2e-tests-kubectl-5qdr5(2b413eca-4524-11e7-bb4c-42010a800005)" with CreatePodSandboxError: "CreatePodSandbox for pod \"without-label_e2e-tests-kubectl-5qdr5(2b413eca-4524-11e7-bb4c-42010a800005)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"without-label_e2e-tests-kubectl-5qdr5\" network: CNI request failed with status 400: 'pods \"without-label\" not found\n'"
May 30 06:39:22.962: INFO: POD NODE PHASE GRACE CONDITIONS
May 30 06:39:22.962: INFO: docker-registry-2-vxftf ci-prtest-5a37c28-2600-ig-m-v6bp Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:20:16 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:20:25 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:20:16 -0400 EDT }]
May 30 06:39:22.962: INFO: registry-console-1-deploy ci-prtest-5a37c28-2600-ig-n-qxpq Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:19:43 -0400 EDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:20:32 -0400 EDT ContainersNotReady containers with unready status: [deployment]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:19:43 -0400 EDT }]
May 30 06:39:22.962: INFO: router-1-9z7km ci-prtest-5a37c28-2600-ig-m-v6bp Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:19:13 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:19:33 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:19:13 -0400 EDT }]
May 30 06:39:22.962: INFO:
May 30 06:39:23.019: INFO:
Logging node info for node ci-prtest-5a37c28-2600-ig-m-v6bp
May 30 06:39:23.049: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ci-prtest-5a37c28-2600-ig-m-v6bp,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ci-prtest-5a37c28-2600-ig-m-v6bp,UID:1c72db27-4521-11e7-bb4c-42010a800005,ResourceVersion:24963,Generation:0,CreationTimestamp:2017-05-30 06:16:57 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-a,kubernetes.io/hostname: ci-prtest-5a37c28-2600-ig-m-v6bp,role: infra,subrole: master,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:7168586717522361702,ProviderID:gce://openshift-gce-devel-ci/us-central1-a/ci-prtest-5a37c28-2600-ig-m-v6bp,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7674597376 0} {<nil>} 7494724Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7569739776 0} {<nil>} 7392324Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:16:57 -0400 EDT RouteCreated openshift-sdn cleared kubelet-set NoRouteCreated} {OutOfDisk False 2017-05-30 06:39:13 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-05-30 06:39:13 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-05-30 06:39:13 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-05-30 06:39:13 -0400 EDT 
2017-05-30 06:16:57 -0400 EDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.128.0.5} {ExternalIP 130.211.115.191} {Hostname ci-prtest-5a37c28-2600-ig-m-v6bp}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:02f1ddb1415c4feba9880b2b8c4c5925,SystemUUID:E5F80BC7-12E6-DDF0-66C7-497EECA19755,BootID:d09ecdfb-ec29-4e9c-b118-3028f7797fa0,KernelVersion:3.10.0-514.6.1.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.3 (Maipo),ContainerRuntimeVersion:docker://1.12.5,KubeletVersion:v1.6.1+5115d708d7,KubeProxyVersion:v1.6.1+5115d708d7,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/openshift/origin-haproxy-router@sha256:8ea695c0608e086ede7786e3b9bf6ba81a2ed3d2952506dbaa3a6a385047bf76 docker.io/openshift/origin-haproxy-router:v3.6.0-alpha.1] 656266633} {[docker.io/openshift/origin-deployer@sha256:393b6ff0ceead1efaece9b2bb8b508b644cbd0c6afb8f2abb4bb6e4f540ccb65 docker.io/openshift/origin-deployer:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-docker-registry@sha256:ec8130ec4591925e1b8609e03a5641e6f2be62a4859f27f59f6267a415b6c01d docker.io/openshift/origin-docker-registry:v3.6.0-alpha.1] 429239546} {[gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25 gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/nginx-slim@sha256:8b4501fe0fe221df663c22e16539f399e89594552f400408303c42f3dd8d0e52 gcr.io/google_containers/nginx-slim:0.8] 110487599} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 gcr.io/google_containers/nginx-slim:0.7] 86864428} {[gcr.io/google_containers/hostexec@sha256:cab8d4e2526f8f767c64febe4ce9e0f0e58cd35fdff81b3aadba4dd041ba9f00 gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 
8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[docker.io/openshift/origin-pod@sha256:478fd0553a9600014256dede2ad4afb0b620421f5e0353a667be3a94d06dc9b0 docker.io/openshift/origin-pod:v3.6.0-alpha.1] 1138998} {[gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 gcr.io/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
May 30 06:39:23.049: INFO:
Logging kubelet events for node ci-prtest-5a37c28-2600-ig-m-v6bp
May 30 06:39:23.078: INFO:
Logging pods the kubelet thinks is on node ci-prtest-5a37c28-2600-ig-m-v6bp
May 30 06:39:23.138: INFO: router-1-9z7km started at 2017-05-30 06:19:13 -0400 EDT (0+1 container statuses recorded)
May 30 06:39:23.138: INFO: Container router ready: true, restart count 0
May 30 06:39:23.138: INFO: docker-registry-2-vxftf started at 2017-05-30 06:20:16 -0400 EDT (0+1 container statuses recorded)
May 30 06:39:23.138: INFO: Container registry ready: true, restart count 0
May 30 06:39:23.376: INFO:
Latency metrics for node ci-prtest-5a37c28-2600-ig-m-v6bp
May 30 06:39:23.376: INFO:
Logging node info for node ci-prtest-5a37c28-2600-ig-n-8d5b
May 30 06:39:23.406: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ci-prtest-5a37c28-2600-ig-n-8d5b,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ci-prtest-5a37c28-2600-ig-n-8d5b,UID:1c6afdb5-4521-11e7-bb4c-42010a800005,ResourceVersion:24967,Generation:0,CreationTimestamp:2017-05-30 06:16:57 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-a,kubernetes.io/hostname: ci-prtest-5a37c28-2600-ig-n-8d5b,role: app,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:525145833943924073,ProviderID:gce://openshift-gce-devel-ci/us-central1-a/ci-prtest-5a37c28-2600-ig-n-8d5b,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7674597376 0} {<nil>} 7494724Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7569739776 0} {<nil>} 7392324Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:16:57 -0400 EDT RouteCreated openshift-sdn cleared kubelet-set NoRouteCreated} {OutOfDisk False 2017-05-30 06:39:20 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-05-30 06:39:20 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-05-30 06:39:20 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-05-30 06:39:20 -0400 EDT 2017-05-30 06:16:57 
-0400 EDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.128.0.4} {ExternalIP 35.192.2.246} {Hostname ci-prtest-5a37c28-2600-ig-n-8d5b}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:02f1ddb1415c4feba9880b2b8c4c5925,SystemUUID:A9707A90-AB4B-146E-45E6-86E62849AF06,BootID:6a692fc9-c920-43a3-a1b7-420d6c2c845f,KernelVersion:3.10.0-514.6.1.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.3 (Maipo),ContainerRuntimeVersion:docker://1.12.5,KubeletVersion:v1.6.1+5115d708d7,KubeProxyVersion:v1.6.1+5115d708d7,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/openshift/origin-sti-builder@sha256:88f85945c4bffaf226fce4e14f7f30158bd7a2a0f70eebe134e26ae89360d458 docker.io/openshift/origin-sti-builder:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-docker-builder@sha256:a76c1a9da9d17f59ded0070ae70f65159b861348b856b8d5e1f0fdcf29f10085 docker.io/openshift/origin-docker-builder:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-deployer@sha256:393b6ff0ceead1efaece9b2bb8b508b644cbd0c6afb8f2abb4bb6e4f540ccb65 docker.io/openshift/origin-deployer:v3.6.0-alpha.1] 635306029} {[docker.io/centos/ruby-22-centos7@sha256:ca01a63f8d2f0dec3f8edf9964e7903908fcc5485e0b7f75f668817609874ad0 docker.io/centos/ruby-22-centos7:latest] 472191707} {[gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25 gcr.io/google_containers/redis:e2e] 419003740} {[docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077 docker.io/centos:centos7] 192556318} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 gcr.io/google_containers/nginx-slim:0.7] 86864428} {[gcr.io/google_containers/hostexec@sha256:cab8d4e2526f8f767c64febe4ce9e0f0e58cd35fdff81b3aadba4dd041ba9f00 gcr.io/google_containers/hostexec:1.2] 13209617} 
{[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/fakegitserver@sha256:e974692bb4d422a4e9ea6ff9df85fa36f189010703400496fea44aac6589d0dd gcr.io/google_containers/fakegitserver:0.1] 5007469} {[gcr.io/google_containers/update-demo@sha256:89ac104fa7c43880d2324f377b79be95b0b2b3fb32e4bd03b8d1e6d91a41f009 gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/eptest@sha256:bb088b26ed78613cce171420168db9a6c62a8dbea17d7be13077e7010bae162f gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 gcr.io/google_containers/busybox:latest] 2433303} {[gcr.io/google_containers/mounttest@sha256:c4dcedb26013ab4231a2b2aaa4eebd5c2a44d5c597fa0613c9ff8bde4fb9fe02 gcr.io/google_containers/mounttest:0.7] 2052704} {[gcr.io/google_containers/portforwardtester@sha256:306879729d3eff635a11b89f3e62e440c9f2fe4dabdfb9ef02bc67f2275f67ab gcr.io/google_containers/portforwardtester:1.2] 1892642} {[gcr.io/google_containers/mounttest-user@sha256:5487c126b03abf4119a8f7950cd5f591f72dbe4ab15623f3387d3917e1268b4e gcr.io/google_containers/mounttest-user:0.5] 1450761} {[gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 gcr.io/google_containers/mounttest:0.8] 1450761} {[docker.io/openshift/origin-pod@sha256:478fd0553a9600014256dede2ad4afb0b620421f5e0353a667be3a94d06dc9b0 docker.io/openshift/origin-pod:v3.6.0-alpha.1] 1138998} {[gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff gcr.io/google_containers/busybox:1.24] 1113554} 
{[docker.io/busybox@sha256:c79345819a6882c31b41bc771d9a94fc52872fa651b36771fbe0c8461d7ee558] 1106304} {[gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google-containers/pause@sha256:9ce5316f9752b8347484ab0f6778573af15524124d52b93230b9a0dcc987e73e gcr.io/google-containers/pause:2.0] 350164}],VolumesInUse:[],VolumesAttached:[],},}
May 30 06:39:23.406: INFO:
Logging kubelet events for node ci-prtest-5a37c28-2600-ig-n-8d5b
May 30 06:39:23.435: INFO:
Logging pods the kubelet thinks is on node ci-prtest-5a37c28-2600-ig-n-8d5b
May 30 06:39:23.719: INFO:
Latency metrics for node ci-prtest-5a37c28-2600-ig-n-8d5b
May 30 06:39:23.719: INFO:
Logging node info for node ci-prtest-5a37c28-2600-ig-n-qxpq
May 30 06:39:23.749: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ci-prtest-5a37c28-2600-ig-n-qxpq,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ci-prtest-5a37c28-2600-ig-n-qxpq,UID:1c7ebc81-4521-11e7-bb4c-42010a800005,ResourceVersion:24965,Generation:0,CreationTimestamp:2017-05-30 06:16:57 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-a,kubernetes.io/hostname: ci-prtest-5a37c28-2600-ig-n-qxpq,role: app,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:7766244258712976745,ProviderID:gce://openshift-gce-devel-ci/us-central1-a/ci-prtest-5a37c28-2600-ig-n-qxpq,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7674597376 0} {<nil>} 7494724Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7569739776 0} {<nil>} 7392324Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:16:57 -0400 EDT RouteCreated openshift-sdn cleared kubelet-set NoRouteCreated} {OutOfDisk False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 
-0400 EDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.128.0.2} {ExternalIP 35.188.205.43} {Hostname ci-prtest-5a37c28-2600-ig-n-qxpq}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:02f1ddb1415c4feba9880b2b8c4c5925,SystemUUID:85D0B28A-CE77-C767-63B8-3B8FFA1FEADA,BootID:a8d41cf2-e920-48b5-ba89-58ef65a40c34,KernelVersion:3.10.0-514.6.1.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.3 (Maipo),ContainerRuntimeVersion:docker://1.12.5,KubeletVersion:v1.6.1+5115d708d7,KubeProxyVersion:v1.6.1+5115d708d7,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/openshift/origin-docker-builder@sha256:a76c1a9da9d17f59ded0070ae70f65159b861348b856b8d5e1f0fdcf29f10085 docker.io/openshift/origin-docker-builder:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-deployer@sha256:393b6ff0ceead1efaece9b2bb8b508b644cbd0c6afb8f2abb4bb6e4f540ccb65 docker.io/openshift/origin-deployer:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-sti-builder@sha256:88f85945c4bffaf226fce4e14f7f30158bd7a2a0f70eebe134e26ae89360d458 docker.io/openshift/origin-sti-builder:v3.6.0-alpha.1] 635306029} {[docker.io/centos/mongodb-32-centos7@sha256:f9aaaf2b6ea764e44c4c3b09375144da797d3a3c4ab09f588f233c01c9d6302a] 567708657} {[172.30.240.85:5000/extended-test-new-app-w3lpf-lwllz/a234567890123456789012345678901234567890123456789012345678@sha256:b28676585758e3fb02ec1996c6b709d7f38e4f543c96be59743d632c2af48d7b 172.30.240.85:5000/extended-test-new-app-w3lpf-lwllz/a234567890123456789012345678901234567890123456789012345678:latest] 487441316} {[docker.io/centos/ruby-23-centos7@sha256:4f8eea80f8f76abadda0c2956ff6f174cbf33ae559f1ed9276282e12682ff4b9] 486837919} {[docker.io/centos/ruby-22-centos7@sha256:ca01a63f8d2f0dec3f8edf9964e7903908fcc5485e0b7f75f668817609874ad0 docker.io/centos/ruby-22-centos7:latest] 472191707} 
{[docker.io/centos/nodejs-4-centos7@sha256:90da46745a770eded68c38bd6e0eaafd6aef6634ee0dce97ee3094b19278db51] 472020649} {[gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25 gcr.io/google_containers/redis:e2e] 419003740} {[docker.io/openshift/origin-base@sha256:3848ab52436662e4193f34063bbfd259c0c09cbe91562acec7dd6eb510ca2e94 docker.io/openshift/origin-base:latest] 363035993} {[docker.io/openshift/origin-pod@sha256:b0a14ec3da6f2c94b2e16ea0d2f6124cb856289dec38572509d3fb0bcf20a67d docker.io/openshift/origin-pod:latest] 213197942} {[172.30.240.85:5000/extended-test-docker-build-pullsecret-jt7g5-zm0kj/image1@sha256:cf5f304e28e9c275d914339a72e3613a096537a26b151889e13ed6d1c7b54c54] 192556318} {[docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077 docker.io/centos:centos7] 192556318} {[gcr.io/google_containers/jessie-dnsutils@sha256:2460d596912244b5f8973573f7150e7264b570015f4becc2d0096f0bd1d17e36 gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 gcr.io/google_containers/nginx-slim:0.7] 86864428} {[gcr.io/google_containers/nettest@sha256:8af3a0e8b8ab906b0648dd575e8785e04c19113531f8ffbaab9e149aa1a60763 gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter@sha256:076acdada33f35b917c9eebe89eba95923601302beac57274985e418b70067e2 
gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo@sha256:89ac104fa7c43880d2324f377b79be95b0b2b3fb32e4bd03b8d1e6d91a41f009 gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver@sha256:f804e8837490d1dfdb5002e073f715fd0a08115de74e5a4847ca952315739372 gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/liveness@sha256:90994881062c7de7bb1761f2f3d020fe9aa3d332a90e00ebd3ca9dcc1ed74f1c gcr.io/google_containers/liveness:e2e] 4387474} {[gcr.io/google_containers/eptest@sha256:bb088b26ed78613cce171420168db9a6c62a8dbea17d7be13077e7010bae162f gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/mounttest@sha256:c4dcedb26013ab4231a2b2aaa4eebd5c2a44d5c597fa0613c9ff8bde4fb9fe02 gcr.io/google_containers/mounttest:0.7] 2052704} {[gcr.io/google_containers/mounttest-user@sha256:5487c126b03abf4119a8f7950cd5f591f72dbe4ab15623f3387d3917e1268b4e gcr.io/google_containers/mounttest-user:0.5] 1450761} {[gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 gcr.io/google_containers/mounttest:0.8] 1450761} {[docker.io/openshift/origin-pod@sha256:478fd0553a9600014256dede2ad4afb0b620421f5e0353a667be3a94d06dc9b0 docker.io/openshift/origin-pod:v3.6.0-alpha.1] 1138998} {[gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff gcr.io/google_containers/busybox:1.24] 1113554} {[docker.io/busybox@sha256:c79345819a6882c31b41bc771d9a94fc52872fa651b36771fbe0c8461d7ee558] 1106304} {[docker.io/busybox@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912] 1093484} {[gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 gcr.io/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
May 30 06:39:23.749: INFO:
Logging kubelet events for node ci-prtest-5a37c28-2600-ig-n-qxpq
May 30 06:39:23.778: INFO:
Logging pods the kubelet thinks is on node ci-prtest-5a37c28-2600-ig-n-qxpq
May 30 06:39:23.838: INFO: registry-console-1-deploy started at 2017-05-30 06:19:43 -0400 EDT (0+1 container statuses recorded)
May 30 06:39:23.838: INFO: Container deployment ready: false, restart count 0
May 30 06:39:24.082: INFO:
Latency metrics for node ci-prtest-5a37c28-2600-ig-n-qxpq
May 30 06:39:24.082: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m1.092679s}
May 30 06:39:24.082: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m1.092679s}
May 30 06:39:24.082: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m1.092679s}
May 30 06:39:24.082: INFO:
Logging node info for node ci-prtest-5a37c28-2600-ig-n-s146
May 30 06:39:24.112: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ci-prtest-5a37c28-2600-ig-n-s146,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ci-prtest-5a37c28-2600-ig-n-s146,UID:1c78543d-4521-11e7-bb4c-42010a800005,ResourceVersion:24966,Generation:0,CreationTimestamp:2017-05-30 06:16:57 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-a,kubernetes.io/hostname: ci-prtest-5a37c28-2600-ig-n-s146,role: app,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:3595964274247269737,ProviderID:gce://openshift-gce-devel-ci/us-central1-a/ci-prtest-5a37c28-2600-ig-n-s146,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7674597376 0} {<nil>} 7494724Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7569739776 0} {<nil>} 7392324Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:16:57 -0400 EDT RouteCreated openshift-sdn cleared kubelet-set NoRouteCreated} {OutOfDisk False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 
-0400 EDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.128.0.3} {ExternalIP 35.188.223.91} {Hostname ci-prtest-5a37c28-2600-ig-n-s146}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:02f1ddb1415c4feba9880b2b8c4c5925,SystemUUID:3B056C4F-99C0-D13B-0A38-B9B29FF68D2A,BootID:d67b4bc6-5a8a-45a0-96ea-c30d674e61b6,KernelVersion:3.10.0-514.6.1.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.3 (Maipo),ContainerRuntimeVersion:docker://1.12.5,KubeletVersion:v1.6.1+5115d708d7,KubeProxyVersion:v1.6.1+5115d708d7,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/openshift/origin-docker-builder@sha256:a76c1a9da9d17f59ded0070ae70f65159b861348b856b8d5e1f0fdcf29f10085 docker.io/openshift/origin-docker-builder:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-deployer@sha256:393b6ff0ceead1efaece9b2bb8b508b644cbd0c6afb8f2abb4bb6e4f540ccb65 docker.io/openshift/origin-deployer:v3.6.0-alpha.1] 635306029} {[172.30.240.85:5000/extended-test-new-app-w3lpf-lwllz/a234567890123456789012345678901234567890123456789012345678@sha256:b28676585758e3fb02ec1996c6b709d7f38e4f543c96be59743d632c2af48d7b] 487441316} {[docker.io/centos/ruby-23-centos7@sha256:4f8eea80f8f76abadda0c2956ff6f174cbf33ae559f1ed9276282e12682ff4b9] 486837919} {[gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25 gcr.io/google_containers/redis:e2e] 419003740} {[docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077 docker.io/centos:7 docker.io/centos:centos7] 192556318} {[172.30.240.85:5000/extended-test-docker-build-pullsecret-jt7g5-zm0kj/image1@sha256:cf5f304e28e9c275d914339a72e3613a096537a26b151889e13ed6d1c7b54c54 172.30.240.85:5000/extended-test-docker-build-pullsecret-jt7g5-zm0kj/image1:latest] 192556318} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 
gcr.io/google_containers/nginx-slim:0.7] 86864428} {[gcr.io/google_containers/hostexec@sha256:cab8d4e2526f8f767c64febe4ce9e0f0e58cd35fdff81b3aadba4dd041ba9f00 gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo@sha256:89ac104fa7c43880d2324f377b79be95b0b2b3fb32e4bd03b8d1e6d91a41f009 gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver@sha256:f804e8837490d1dfdb5002e073f715fd0a08115de74e5a4847ca952315739372 gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/mounttest@sha256:c4dcedb26013ab4231a2b2aaa4eebd5c2a44d5c597fa0613c9ff8bde4fb9fe02 gcr.io/google_containers/mounttest:0.7] 2052704} {[gcr.io/google_containers/portforwardtester@sha256:306879729d3eff635a11b89f3e62e440c9f2fe4dabdfb9ef02bc67f2275f67ab gcr.io/google_containers/portforwardtester:1.2] 1892642} {[gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 gcr.io/google_containers/mounttest:0.8] 1450761} {[gcr.io/google_containers/mounttest-user@sha256:5487c126b03abf4119a8f7950cd5f591f72dbe4ab15623f3387d3917e1268b4e gcr.io/google_containers/mounttest-user:0.5] 1450761} {[docker.io/openshift/origin-pod@sha256:478fd0553a9600014256dede2ad4afb0b620421f5e0353a667be3a94d06dc9b0 docker.io/openshift/origin-pod:v3.6.0-alpha.1] 1138998} {[gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff gcr.io/google_containers/busybox:1.24] 1113554} 
{[gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 gcr.io/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
May 30 06:39:24.112: INFO:
Logging kubelet events for node ci-prtest-5a37c28-2600-ig-n-s146
May 30 06:39:24.141: INFO:
Logging pods the kubelet thinks is on node ci-prtest-5a37c28-2600-ig-n-s146
May 30 06:39:24.446: INFO:
Latency metrics for node ci-prtest-5a37c28-2600-ig-n-s146
May 30 06:39:24.446: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.5 Latency:14.242243s}
May 30 06:39:24.446: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:14.242243s}
May 30 06:39:24.446: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.9 Latency:14.242243s}
STEP: Dumping a list of prepulled images on each node
May 30 06:39:24.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5qdr5" for this suite.
May 30 06:39:36.896: INFO: namespace: e2e-tests-kubectl-5qdr5, resource: bindings, ignored listing per whitelist
</details>
Seen here: https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/837/testReport/junit/(root)/Extended/_k8s_io__Kubectl_client__k8s_io__Kubectl_taint__Serial__should_update_the_taint_on_a_node/

Kubectl taint should update the taint on a node fails with "Unable to connect to the server: i/o timeout"

<details>
<summary>Stacktrace</summary>
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1476
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl [kubectl --server=https://internal-api.prtest-5a37c28-2600.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig taint nodes ci-prtest-5a37c28-2600-ig-n-8d5b kubernetes.io/e2e-taint-key-001-2b3a434b-4524-11e7-8ce8-0ee4c522aca2=testing-taint-value:NoSchedule] [] <nil> Unable to connect to the server: dial tcp 35.188.192.152:8443: i/o timeout\n [] <nil> 0xc420771770 exit status 1 <nil> <nil> true [0xc420034600 0xc420034618 0xc420034630] [0xc420034600 0xc420034618 0xc420034630] [0xc420034610 0xc420034628] [0xdd8390 0xdd8390] 0xc421be8a20 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 35.188.192.152:8443: i/o timeout\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl [kubectl --server=https://internal-api.prtest-5a37c28-2600.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig taint nodes ci-prtest-5a37c28-2600-ig-n-8d5b kubernetes.io/e2e-taint-key-001-2b3a434b-4524-11e7-8ce8-0ee4c522aca2=testing-taint-value:NoSchedule] [] <nil> Unable to connect to the server: dial tcp 35.188.192.152:8443: i/o timeout
[] <nil> 0xc420771770 exit status 1 <nil> <nil> true [0xc420034600 0xc420034618 0xc420034630] [0xc420034600 0xc420034618 0xc420034630] [0xc420034610 0xc420034628] [0xdd8390 0xdd8390] 0xc421be8a20 <nil>}:
Command stdout:
stderr:
Unable to connect to the server: dial tcp 35.188.192.152:8443: i/o timeout
error:
exit status 1
not to have occurred
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:175
</details>
<details>
<summary>Standard Output</summary>
[BeforeEach] [Top Level]
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Kubectl client
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
May 30 06:38:49.939: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
May 30 06:38:50.030: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:289
[It] should update the taint on a node
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1476
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: adding the taint kubernetes.io/e2e-taint-key-001-2b3a434b-4524-11e7-8ce8-0ee4c522aca2=testing-taint-value:NoSchedule to a node
May 30 06:38:52.629: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://internal-api.prtest-5a37c28-2600.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig taint nodes ci-prtest-5a37c28-2600-ig-n-8d5b kubernetes.io/e2e-taint-key-001-2b3a434b-4524-11e7-8ce8-0ee4c522aca2=testing-taint-value:NoSchedule'
May 30 06:39:22.842: INFO: rc: 127
May 30 06:39:22.842: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl client
/openshifttmp/openshift/build-rpm-release/tito/rpmbuild-originqOuLw5/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
STEP: Collecting events from namespace "e2e-tests-kubectl-5qdr5".
STEP: Found 8 events.
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:50 -0400 EDT - event for without-label: {default-scheduler } Scheduled: Successfully assigned without-label to ci-prtest-5a37c28-2600-ig-n-8d5b
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:51 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} Pulled: Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:51 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} Created: Created container with id 8a89ca1e533de41a58a5762ea85861260ac34e406b741006cd758c278fe1650f
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:51 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} Started: Started container with id 8a89ca1e533de41a58a5762ea85861260ac34e406b741006cd758c278fe1650f
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:53 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} FailedMount: MountVolume.SetUp failed for volume "kubernetes.io/secret/2b413eca-4524-11e7-bb4c-42010a800005-default-token-gbhnf" (spec.Name: "default-token-gbhnf") pod "2b413eca-4524-11e7-bb4c-42010a800005" (UID: "2b413eca-4524-11e7-bb4c-42010a800005") with: secret "e2e-tests-kubectl-5qdr5"/"default-token-gbhnf" not registered
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:53 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} Killing: Killing container with id docker://8a89ca1e533de41a58a5762ea85861260ac34e406b741006cd758c278fe1650f:Need to kill Pod
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:53 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 30 06:39:22.873: INFO: At 2017-05-30 06:38:53 -0400 EDT - event for without-label: {kubelet ci-prtest-5a37c28-2600-ig-n-8d5b} FailedSync: Error syncing pod, skipping: failed to "CreatePodSandbox" for "without-label_e2e-tests-kubectl-5qdr5(2b413eca-4524-11e7-bb4c-42010a800005)" with CreatePodSandboxError: "CreatePodSandbox for pod \"without-label_e2e-tests-kubectl-5qdr5(2b413eca-4524-11e7-bb4c-42010a800005)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"without-label_e2e-tests-kubectl-5qdr5\" network: CNI request failed with status 400: 'pods \"without-label\" not found\n'"
May 30 06:39:22.962: INFO: POD NODE PHASE GRACE CONDITIONS
May 30 06:39:22.962: INFO: docker-registry-2-vxftf ci-prtest-5a37c28-2600-ig-m-v6bp Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:20:16 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:20:25 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:20:16 -0400 EDT }]
May 30 06:39:22.962: INFO: registry-console-1-deploy ci-prtest-5a37c28-2600-ig-n-qxpq Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:19:43 -0400 EDT } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:20:32 -0400 EDT ContainersNotReady containers with unready status: [deployment]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:19:43 -0400 EDT }]
May 30 06:39:22.962: INFO: router-1-9z7km ci-prtest-5a37c28-2600-ig-m-v6bp Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:19:13 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:19:33 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:19:13 -0400 EDT }]
May 30 06:39:22.962: INFO:
May 30 06:39:23.019: INFO:
Logging node info for node ci-prtest-5a37c28-2600-ig-m-v6bp
May 30 06:39:23.049: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ci-prtest-5a37c28-2600-ig-m-v6bp,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ci-prtest-5a37c28-2600-ig-m-v6bp,UID:1c72db27-4521-11e7-bb4c-42010a800005,ResourceVersion:24963,Generation:0,CreationTimestamp:2017-05-30 06:16:57 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-a,kubernetes.io/hostname: ci-prtest-5a37c28-2600-ig-m-v6bp,role: infra,subrole: master,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:7168586717522361702,ProviderID:gce://openshift-gce-devel-ci/us-central1-a/ci-prtest-5a37c28-2600-ig-m-v6bp,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7674597376 0} {<nil>} 7494724Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7569739776 0} {<nil>} 7392324Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:16:57 -0400 EDT RouteCreated openshift-sdn cleared kubelet-set NoRouteCreated} {OutOfDisk False 2017-05-30 06:39:13 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-05-30 06:39:13 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-05-30 06:39:13 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-05-30 06:39:13 -0400 EDT 
2017-05-30 06:16:57 -0400 EDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.128.0.5} {ExternalIP 130.211.115.191} {Hostname ci-prtest-5a37c28-2600-ig-m-v6bp}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:02f1ddb1415c4feba9880b2b8c4c5925,SystemUUID:E5F80BC7-12E6-DDF0-66C7-497EECA19755,BootID:d09ecdfb-ec29-4e9c-b118-3028f7797fa0,KernelVersion:3.10.0-514.6.1.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.3 (Maipo),ContainerRuntimeVersion:docker://1.12.5,KubeletVersion:v1.6.1+5115d708d7,KubeProxyVersion:v1.6.1+5115d708d7,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/openshift/origin-haproxy-router@sha256:8ea695c0608e086ede7786e3b9bf6ba81a2ed3d2952506dbaa3a6a385047bf76 docker.io/openshift/origin-haproxy-router:v3.6.0-alpha.1] 656266633} {[docker.io/openshift/origin-deployer@sha256:393b6ff0ceead1efaece9b2bb8b508b644cbd0c6afb8f2abb4bb6e4f540ccb65 docker.io/openshift/origin-deployer:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-docker-registry@sha256:ec8130ec4591925e1b8609e03a5641e6f2be62a4859f27f59f6267a415b6c01d docker.io/openshift/origin-docker-registry:v3.6.0-alpha.1] 429239546} {[gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25 gcr.io/google_containers/redis:e2e] 419003740} {[gcr.io/google_containers/nginx-slim@sha256:8b4501fe0fe221df663c22e16539f399e89594552f400408303c42f3dd8d0e52 gcr.io/google_containers/nginx-slim:0.8] 110487599} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 gcr.io/google_containers/nginx-slim:0.7] 86864428} {[gcr.io/google_containers/hostexec@sha256:cab8d4e2526f8f767c64febe4ce9e0f0e58cd35fdff81b3aadba4dd041ba9f00 gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 
8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[docker.io/openshift/origin-pod@sha256:478fd0553a9600014256dede2ad4afb0b620421f5e0353a667be3a94d06dc9b0 docker.io/openshift/origin-pod:v3.6.0-alpha.1] 1138998} {[gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 gcr.io/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
May 30 06:39:23.049: INFO:
Logging kubelet events for node ci-prtest-5a37c28-2600-ig-m-v6bp
May 30 06:39:23.078: INFO:
Logging pods the kubelet thinks is on node ci-prtest-5a37c28-2600-ig-m-v6bp
May 30 06:39:23.138: INFO: router-1-9z7km started at 2017-05-30 06:19:13 -0400 EDT (0+1 container statuses recorded)
May 30 06:39:23.138: INFO: Container router ready: true, restart count 0
May 30 06:39:23.138: INFO: docker-registry-2-vxftf started at 2017-05-30 06:20:16 -0400 EDT (0+1 container statuses recorded)
May 30 06:39:23.138: INFO: Container registry ready: true, restart count 0
May 30 06:39:23.376: INFO:
Latency metrics for node ci-prtest-5a37c28-2600-ig-m-v6bp
May 30 06:39:23.376: INFO:
Logging node info for node ci-prtest-5a37c28-2600-ig-n-8d5b
May 30 06:39:23.406: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ci-prtest-5a37c28-2600-ig-n-8d5b,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ci-prtest-5a37c28-2600-ig-n-8d5b,UID:1c6afdb5-4521-11e7-bb4c-42010a800005,ResourceVersion:24967,Generation:0,CreationTimestamp:2017-05-30 06:16:57 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-a,kubernetes.io/hostname: ci-prtest-5a37c28-2600-ig-n-8d5b,role: app,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:525145833943924073,ProviderID:gce://openshift-gce-devel-ci/us-central1-a/ci-prtest-5a37c28-2600-ig-n-8d5b,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7674597376 0} {<nil>} 7494724Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7569739776 0} {<nil>} 7392324Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:16:57 -0400 EDT RouteCreated openshift-sdn cleared kubelet-set NoRouteCreated} {OutOfDisk False 2017-05-30 06:39:20 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-05-30 06:39:20 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-05-30 06:39:20 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-05-30 06:39:20 -0400 EDT 2017-05-30 06:16:57 
-0400 EDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.128.0.4} {ExternalIP 35.192.2.246} {Hostname ci-prtest-5a37c28-2600-ig-n-8d5b}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:02f1ddb1415c4feba9880b2b8c4c5925,SystemUUID:A9707A90-AB4B-146E-45E6-86E62849AF06,BootID:6a692fc9-c920-43a3-a1b7-420d6c2c845f,KernelVersion:3.10.0-514.6.1.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.3 (Maipo),ContainerRuntimeVersion:docker://1.12.5,KubeletVersion:v1.6.1+5115d708d7,KubeProxyVersion:v1.6.1+5115d708d7,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/openshift/origin-sti-builder@sha256:88f85945c4bffaf226fce4e14f7f30158bd7a2a0f70eebe134e26ae89360d458 docker.io/openshift/origin-sti-builder:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-docker-builder@sha256:a76c1a9da9d17f59ded0070ae70f65159b861348b856b8d5e1f0fdcf29f10085 docker.io/openshift/origin-docker-builder:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-deployer@sha256:393b6ff0ceead1efaece9b2bb8b508b644cbd0c6afb8f2abb4bb6e4f540ccb65 docker.io/openshift/origin-deployer:v3.6.0-alpha.1] 635306029} {[docker.io/centos/ruby-22-centos7@sha256:ca01a63f8d2f0dec3f8edf9964e7903908fcc5485e0b7f75f668817609874ad0 docker.io/centos/ruby-22-centos7:latest] 472191707} {[gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25 gcr.io/google_containers/redis:e2e] 419003740} {[docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077 docker.io/centos:centos7] 192556318} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 gcr.io/google_containers/nginx-slim:0.7] 86864428} {[gcr.io/google_containers/hostexec@sha256:cab8d4e2526f8f767c64febe4ce9e0f0e58cd35fdff81b3aadba4dd041ba9f00 gcr.io/google_containers/hostexec:1.2] 13209617} 
{[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/fakegitserver@sha256:e974692bb4d422a4e9ea6ff9df85fa36f189010703400496fea44aac6589d0dd gcr.io/google_containers/fakegitserver:0.1] 5007469} {[gcr.io/google_containers/update-demo@sha256:89ac104fa7c43880d2324f377b79be95b0b2b3fb32e4bd03b8d1e6d91a41f009 gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/eptest@sha256:bb088b26ed78613cce171420168db9a6c62a8dbea17d7be13077e7010bae162f gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 gcr.io/google_containers/busybox:latest] 2433303} {[gcr.io/google_containers/mounttest@sha256:c4dcedb26013ab4231a2b2aaa4eebd5c2a44d5c597fa0613c9ff8bde4fb9fe02 gcr.io/google_containers/mounttest:0.7] 2052704} {[gcr.io/google_containers/portforwardtester@sha256:306879729d3eff635a11b89f3e62e440c9f2fe4dabdfb9ef02bc67f2275f67ab gcr.io/google_containers/portforwardtester:1.2] 1892642} {[gcr.io/google_containers/mounttest-user@sha256:5487c126b03abf4119a8f7950cd5f591f72dbe4ab15623f3387d3917e1268b4e gcr.io/google_containers/mounttest-user:0.5] 1450761} {[gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 gcr.io/google_containers/mounttest:0.8] 1450761} {[docker.io/openshift/origin-pod@sha256:478fd0553a9600014256dede2ad4afb0b620421f5e0353a667be3a94d06dc9b0 docker.io/openshift/origin-pod:v3.6.0-alpha.1] 1138998} {[gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff gcr.io/google_containers/busybox:1.24] 1113554} 
{[docker.io/busybox@sha256:c79345819a6882c31b41bc771d9a94fc52872fa651b36771fbe0c8461d7ee558] 1106304} {[gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 gcr.io/google_containers/pause-amd64:3.0] 746888} {[gcr.io/google-containers/pause@sha256:9ce5316f9752b8347484ab0f6778573af15524124d52b93230b9a0dcc987e73e gcr.io/google-containers/pause:2.0] 350164}],VolumesInUse:[],VolumesAttached:[],},}
May 30 06:39:23.406: INFO:
Logging kubelet events for node ci-prtest-5a37c28-2600-ig-n-8d5b
May 30 06:39:23.435: INFO:
Logging pods the kubelet thinks is on node ci-prtest-5a37c28-2600-ig-n-8d5b
May 30 06:39:23.719: INFO:
Latency metrics for node ci-prtest-5a37c28-2600-ig-n-8d5b
May 30 06:39:23.719: INFO:
Logging node info for node ci-prtest-5a37c28-2600-ig-n-qxpq
May 30 06:39:23.749: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ci-prtest-5a37c28-2600-ig-n-qxpq,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ci-prtest-5a37c28-2600-ig-n-qxpq,UID:1c7ebc81-4521-11e7-bb4c-42010a800005,ResourceVersion:24965,Generation:0,CreationTimestamp:2017-05-30 06:16:57 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-a,kubernetes.io/hostname: ci-prtest-5a37c28-2600-ig-n-qxpq,role: app,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:7766244258712976745,ProviderID:gce://openshift-gce-devel-ci/us-central1-a/ci-prtest-5a37c28-2600-ig-n-qxpq,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7674597376 0} {<nil>} 7494724Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7569739776 0} {<nil>} 7392324Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:16:57 -0400 EDT RouteCreated openshift-sdn cleared kubelet-set NoRouteCreated} {OutOfDisk False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 
-0400 EDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.128.0.2} {ExternalIP 35.188.205.43} {Hostname ci-prtest-5a37c28-2600-ig-n-qxpq}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:02f1ddb1415c4feba9880b2b8c4c5925,SystemUUID:85D0B28A-CE77-C767-63B8-3B8FFA1FEADA,BootID:a8d41cf2-e920-48b5-ba89-58ef65a40c34,KernelVersion:3.10.0-514.6.1.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.3 (Maipo),ContainerRuntimeVersion:docker://1.12.5,KubeletVersion:v1.6.1+5115d708d7,KubeProxyVersion:v1.6.1+5115d708d7,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/openshift/origin-docker-builder@sha256:a76c1a9da9d17f59ded0070ae70f65159b861348b856b8d5e1f0fdcf29f10085 docker.io/openshift/origin-docker-builder:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-deployer@sha256:393b6ff0ceead1efaece9b2bb8b508b644cbd0c6afb8f2abb4bb6e4f540ccb65 docker.io/openshift/origin-deployer:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-sti-builder@sha256:88f85945c4bffaf226fce4e14f7f30158bd7a2a0f70eebe134e26ae89360d458 docker.io/openshift/origin-sti-builder:v3.6.0-alpha.1] 635306029} {[docker.io/centos/mongodb-32-centos7@sha256:f9aaaf2b6ea764e44c4c3b09375144da797d3a3c4ab09f588f233c01c9d6302a] 567708657} {[172.30.240.85:5000/extended-test-new-app-w3lpf-lwllz/a234567890123456789012345678901234567890123456789012345678@sha256:b28676585758e3fb02ec1996c6b709d7f38e4f543c96be59743d632c2af48d7b 172.30.240.85:5000/extended-test-new-app-w3lpf-lwllz/a234567890123456789012345678901234567890123456789012345678:latest] 487441316} {[docker.io/centos/ruby-23-centos7@sha256:4f8eea80f8f76abadda0c2956ff6f174cbf33ae559f1ed9276282e12682ff4b9] 486837919} {[docker.io/centos/ruby-22-centos7@sha256:ca01a63f8d2f0dec3f8edf9964e7903908fcc5485e0b7f75f668817609874ad0 docker.io/centos/ruby-22-centos7:latest] 472191707} 
{[docker.io/centos/nodejs-4-centos7@sha256:90da46745a770eded68c38bd6e0eaafd6aef6634ee0dce97ee3094b19278db51] 472020649} {[gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25 gcr.io/google_containers/redis:e2e] 419003740} {[docker.io/openshift/origin-base@sha256:3848ab52436662e4193f34063bbfd259c0c09cbe91562acec7dd6eb510ca2e94 docker.io/openshift/origin-base:latest] 363035993} {[docker.io/openshift/origin-pod@sha256:b0a14ec3da6f2c94b2e16ea0d2f6124cb856289dec38572509d3fb0bcf20a67d docker.io/openshift/origin-pod:latest] 213197942} {[172.30.240.85:5000/extended-test-docker-build-pullsecret-jt7g5-zm0kj/image1@sha256:cf5f304e28e9c275d914339a72e3613a096537a26b151889e13ed6d1c7b54c54] 192556318} {[docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077 docker.io/centos:centos7] 192556318} {[gcr.io/google_containers/jessie-dnsutils@sha256:2460d596912244b5f8973573f7150e7264b570015f4becc2d0096f0bd1d17e36 gcr.io/google_containers/jessie-dnsutils:e2e] 190148402} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 gcr.io/google_containers/nginx-slim:0.7] 86864428} {[gcr.io/google_containers/nettest@sha256:8af3a0e8b8ab906b0648dd575e8785e04c19113531f8ffbaab9e149aa1a60763 gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/porter@sha256:076acdada33f35b917c9eebe89eba95923601302beac57274985e418b70067e2 
gcr.io/google_containers/porter:cd5cb5791ebaa8641955f0e8c2a9bed669b1eaab] 5010921} {[gcr.io/google_containers/update-demo@sha256:89ac104fa7c43880d2324f377b79be95b0b2b3fb32e4bd03b8d1e6d91a41f009 gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver@sha256:f804e8837490d1dfdb5002e073f715fd0a08115de74e5a4847ca952315739372 gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/liveness@sha256:90994881062c7de7bb1761f2f3d020fe9aa3d332a90e00ebd3ca9dcc1ed74f1c gcr.io/google_containers/liveness:e2e] 4387474} {[gcr.io/google_containers/eptest@sha256:bb088b26ed78613cce171420168db9a6c62a8dbea17d7be13077e7010bae162f gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/mounttest@sha256:c4dcedb26013ab4231a2b2aaa4eebd5c2a44d5c597fa0613c9ff8bde4fb9fe02 gcr.io/google_containers/mounttest:0.7] 2052704} {[gcr.io/google_containers/mounttest-user@sha256:5487c126b03abf4119a8f7950cd5f591f72dbe4ab15623f3387d3917e1268b4e gcr.io/google_containers/mounttest-user:0.5] 1450761} {[gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 gcr.io/google_containers/mounttest:0.8] 1450761} {[docker.io/openshift/origin-pod@sha256:478fd0553a9600014256dede2ad4afb0b620421f5e0353a667be3a94d06dc9b0 docker.io/openshift/origin-pod:v3.6.0-alpha.1] 1138998} {[gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff gcr.io/google_containers/busybox:1.24] 1113554} {[docker.io/busybox@sha256:c79345819a6882c31b41bc771d9a94fc52872fa651b36771fbe0c8461d7ee558] 1106304} {[docker.io/busybox@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912] 1093484} {[gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 gcr.io/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
May 30 06:39:23.749: INFO:
Logging kubelet events for node ci-prtest-5a37c28-2600-ig-n-qxpq
May 30 06:39:23.778: INFO:
Logging pods the kubelet thinks is on node ci-prtest-5a37c28-2600-ig-n-qxpq
May 30 06:39:23.838: INFO: registry-console-1-deploy started at 2017-05-30 06:19:43 -0400 EDT (0+1 container statuses recorded)
May 30 06:39:23.838: INFO: Container deployment ready: false, restart count 0
May 30 06:39:24.082: INFO:
Latency metrics for node ci-prtest-5a37c28-2600-ig-n-qxpq
May 30 06:39:24.082: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m1.092679s}
May 30 06:39:24.082: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m1.092679s}
May 30 06:39:24.082: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m1.092679s}
May 30 06:39:24.082: INFO:
Logging node info for node ci-prtest-5a37c28-2600-ig-n-s146
May 30 06:39:24.112: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ci-prtest-5a37c28-2600-ig-n-s146,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ci-prtest-5a37c28-2600-ig-n-s146,UID:1c78543d-4521-11e7-bb4c-42010a800005,ResourceVersion:24966,Generation:0,CreationTimestamp:2017-05-30 06:16:57 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-a,kubernetes.io/hostname: ci-prtest-5a37c28-2600-ig-n-s146,role: app,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:3595964274247269737,ProviderID:gce://openshift-gce-devel-ci/us-central1-a/ci-prtest-5a37c28-2600-ig-n-s146,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7674597376 0} {<nil>} 7494724Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{7569739776 0} {<nil>} 7392324Ki BinarySI},pods: {{20 0} {<nil>} 20 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 0001-01-01 00:00:00 +0000 UTC 2017-05-30 06:16:57 -0400 EDT RouteCreated openshift-sdn cleared kubelet-set NoRouteCreated} {OutOfDisk False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 -0400 EDT KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-05-30 06:39:19 -0400 EDT 2017-05-30 06:16:57 
-0400 EDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.128.0.3} {ExternalIP 35.188.223.91} {Hostname ci-prtest-5a37c28-2600-ig-n-s146}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:02f1ddb1415c4feba9880b2b8c4c5925,SystemUUID:3B056C4F-99C0-D13B-0A38-B9B29FF68D2A,BootID:d67b4bc6-5a8a-45a0-96ea-c30d674e61b6,KernelVersion:3.10.0-514.6.1.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.3 (Maipo),ContainerRuntimeVersion:docker://1.12.5,KubeletVersion:v1.6.1+5115d708d7,KubeProxyVersion:v1.6.1+5115d708d7,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/openshift/origin-docker-builder@sha256:a76c1a9da9d17f59ded0070ae70f65159b861348b856b8d5e1f0fdcf29f10085 docker.io/openshift/origin-docker-builder:v3.6.0-alpha.1] 635306029} {[docker.io/openshift/origin-deployer@sha256:393b6ff0ceead1efaece9b2bb8b508b644cbd0c6afb8f2abb4bb6e4f540ccb65 docker.io/openshift/origin-deployer:v3.6.0-alpha.1] 635306029} {[172.30.240.85:5000/extended-test-new-app-w3lpf-lwllz/a234567890123456789012345678901234567890123456789012345678@sha256:b28676585758e3fb02ec1996c6b709d7f38e4f543c96be59743d632c2af48d7b] 487441316} {[docker.io/centos/ruby-23-centos7@sha256:4f8eea80f8f76abadda0c2956ff6f174cbf33ae559f1ed9276282e12682ff4b9] 486837919} {[gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25 gcr.io/google_containers/redis:e2e] 419003740} {[docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077 docker.io/centos:7 docker.io/centos:centos7] 192556318} {[172.30.240.85:5000/extended-test-docker-build-pullsecret-jt7g5-zm0kj/image1@sha256:cf5f304e28e9c275d914339a72e3613a096537a26b151889e13ed6d1c7b54c54 172.30.240.85:5000/extended-test-docker-build-pullsecret-jt7g5-zm0kj/image1:latest] 192556318} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 
gcr.io/google_containers/nginx-slim:0.7] 86864428} {[gcr.io/google_containers/hostexec@sha256:cab8d4e2526f8f767c64febe4ce9e0f0e58cd35fdff81b3aadba4dd041ba9f00 gcr.io/google_containers/hostexec:1.2] 13209617} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8897789} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[gcr.io/google_containers/update-demo@sha256:89ac104fa7c43880d2324f377b79be95b0b2b3fb32e4bd03b8d1e6d91a41f009 gcr.io/google_containers/update-demo:nautilus] 4555533} {[gcr.io/google_containers/test-webserver@sha256:f804e8837490d1dfdb5002e073f715fd0a08115de74e5a4847ca952315739372 gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/mounttest@sha256:c4dcedb26013ab4231a2b2aaa4eebd5c2a44d5c597fa0613c9ff8bde4fb9fe02 gcr.io/google_containers/mounttest:0.7] 2052704} {[gcr.io/google_containers/portforwardtester@sha256:306879729d3eff635a11b89f3e62e440c9f2fe4dabdfb9ef02bc67f2275f67ab gcr.io/google_containers/portforwardtester:1.2] 1892642} {[gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 gcr.io/google_containers/mounttest:0.8] 1450761} {[gcr.io/google_containers/mounttest-user@sha256:5487c126b03abf4119a8f7950cd5f591f72dbe4ab15623f3387d3917e1268b4e gcr.io/google_containers/mounttest-user:0.5] 1450761} {[docker.io/openshift/origin-pod@sha256:478fd0553a9600014256dede2ad4afb0b620421f5e0353a667be3a94d06dc9b0 docker.io/openshift/origin-pod:v3.6.0-alpha.1] 1138998} {[gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff gcr.io/google_containers/busybox:1.24] 1113554} 
{[gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 gcr.io/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
May 30 06:39:24.112: INFO:
Logging kubelet events for node ci-prtest-5a37c28-2600-ig-n-s146
May 30 06:39:24.141: INFO:
Logging pods the kubelet thinks is on node ci-prtest-5a37c28-2600-ig-n-s146
May 30 06:39:24.446: INFO:
Latency metrics for node ci-prtest-5a37c28-2600-ig-n-s146
May 30 06:39:24.446: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.5 Latency:14.242243s}
May 30 06:39:24.446: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:14.242243s}
May 30 06:39:24.446: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.9 Latency:14.242243s}
STEP: Dumping a list of prepulled images on each node
May 30 06:39:24.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5qdr5" for this suite.
May 30 06:39:36.896: INFO: namespace: e2e-tests-kubectl-5qdr5, resource: bindings, ignored listing per whitelist
</details>
Seen here: https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/837/testReport/junit/(root)/Extended/_k8s_io__Kubectl_client__k8s_io__Kubectl_taint__Serial__should_update_the_taint_on_a_node/
kubernetes io arch beta kubernetes io instance type standard beta kubernetes io os linux failure domain beta kubernetes io region us failure domain beta kubernetes io zone us a kubernetes io hostname ci prtest ig n qxpq role app annotations map string volumes kubernetes io controller managed attach detach true ownerreferences finalizers clustername spec nodespec podcidr externalid providerid gce openshift gce devel ci us a ci prtest ig n qxpq unschedulable false taints status nodestatus capacity resourcelist cpu decimalsi memory binarysi pods decimalsi allocatable resourcelist cpu decimalsi memory binarysi pods decimalsi phase conditions addresses daemonendpoints nodedaemonendpoints kubeletendpoint daemonendpoint port nodeinfo nodesysteminfo machineid systemuuid bootid kernelversion osimage red hat enterprise linux server maipo containerruntimeversion docker kubeletversion kubeproxyversion operatingsystem linux architecture images volumesinuse volumesattached may info logging kubelet events for node ci prtest ig n qxpq may info logging pods the kubelet thinks is on node ci prtest ig n qxpq may info registry console deploy started at edt container statuses recorded may info container deployment ready false restart count may info latency metrics for node ci prtest ig n qxpq may info operation sync method pod worker latency microseconds quantile latency may info operation sync method pod worker latency microseconds quantile latency may info operation sync method pod worker latency microseconds quantile latency may info logging node info for node ci prtest ig n may info node info node objectmeta io apimachinery pkg apis meta objectmeta name ci prtest ig n generatename namespace selflink api nodes ci prtest ig n uid resourceversion generation creationtimestamp edt deletiontimestamp deletiongraceperiodseconds nil labels map string beta kubernetes io arch beta kubernetes io instance type standard beta kubernetes io os linux failure domain beta kubernetes io region us 
failure domain beta kubernetes io zone us a kubernetes io hostname ci prtest ig n role app annotations map string volumes kubernetes io controller managed attach detach true ownerreferences finalizers clustername spec nodespec podcidr externalid providerid gce openshift gce devel ci us a ci prtest ig n unschedulable false taints status nodestatus capacity resourcelist cpu decimalsi memory binarysi pods decimalsi allocatable resourcelist cpu decimalsi memory binarysi pods decimalsi phase conditions addresses daemonendpoints nodedaemonendpoints kubeletendpoint daemonendpoint port nodeinfo nodesysteminfo machineid systemuuid bootid kernelversion osimage red hat enterprise linux server maipo containerruntimeversion docker kubeletversion kubeproxyversion operatingsystem linux architecture images volumesinuse volumesattached may info logging kubelet events for node ci prtest ig n may info logging pods the kubelet thinks is on node ci prtest ig n may info latency metrics for node ci prtest ig n may info operation pull image method docker operations latency microseconds quantile latency may info operation pull image method docker operations latency microseconds quantile latency may info operation pull image method docker operations latency microseconds quantile latency step dumping a list of prepulled images on each node may info waiting up to for all but nodes to be ready step destroying namespace tests kubectl for this suite may info namespace tests kubectl resource bindings ignored listing per whitelist seen here | 1 |
68,365 | 7,094,047,117 | IssuesEvent | 2018-01-12 23:28:39 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | closed | Re-enable Testing for All Modules | Knack P1 Test | Tests must be re-enabled in the CI for the following modules.
- [x] appservice
- [x] backup
- [x] batchai
- [x] dla
- [x] dls
- [x] eventgrid
- [x] find
- [x] interactive
- [x] keyvault
- [x] lab
- [x] monitor
- [x] profile
- [x] rdbms
- [x] role
- [x] sql
- [x] vm
| 1.0 | Re-enable Testing for All Modules - Tests must be re-enabled in the CI for the following modules.
- [x] appservice
- [x] backup
- [x] batchai
- [x] dla
- [x] dls
- [x] eventgrid
- [x] find
- [x] interactive
- [x] keyvault
- [x] lab
- [x] monitor
- [x] profile
- [x] rdbms
- [x] role
- [x] sql
- [x] vm
| test | re enable testing for all modules tests must be re enabled in the ci for the following modules appservice backup batchai dla dls eventgrid find interactive keyvault lab monitor profile rdbms role sql vm | 1 |
154,411 | 24,289,474,835 | IssuesEvent | 2022-09-29 03:41:45 | rizonesoft/Notepad3 | https://api.github.com/repos/rizonesoft/Notepad3 | closed | Color picker broken | works as designed close on approval | Input fields are limited to 2 characters, but should be at least 3 for numbers 0-255. | 1.0 | Color picker broken - Input fields are limited to 2 characters, but should be at least 3 for numbers 0-255. | non_test | color picker broken input fields are limited to characters but should be at least for numbers | 0 |
57,709 | 6,554,769,947 | IssuesEvent | 2017-09-06 07:40:24 | red/red | https://api.github.com/repos/red/red | closed | Black color is not rendered when 'black word is used in draw blocks | GUI status.built status.tested type.bug | Tested on macOS Sierra:
```
view [base 300x300 draw [fill-pen black box 20x20 280x280]]
```
doesn't render the black filling. Any other rebol-color works, and any tuple does too, except 0.0.0! | 1.0 | Black color is not rendered when 'black word is used in draw blocks - Tested on macOS Sierra:
```
view [base 300x300 draw [fill-pen black box 20x20 280x280]]
```
doesn't render the black filling. Any other rebol-color works, and any tuple does too, except 0.0.0! | test | black color is not rendered when black word is used in draw blocks tested on macos sierra view doesn t render the black filling any other rebol color works and any tuple does too except | 1 |
368,542 | 25,797,514,933 | IssuesEvent | 2022-12-10 18:06:10 | TEIC/TEI | https://api.github.com/repos/TEIC/TEI | opened | on constraint (of attribute alternation) duplication | Status: Needs Discussion TEI: Guidelines & Documentation TEI: Schema CouncilResponsibility | ## Introduction
There are 6 uses of `<attList org="choice">` in the _Guidelines_:
_(See below for glossary of column headings.)_
| element | attributes | works? | +cS? | note |
|--------------- | ----------------------- | ------ | ------- | ----- |
| `<anyElement>` | `@include`, `@except` | **NO** | no | [1,2] |
| `<classRef>` | `@include`, `@except` | yes | no | |
| `<dataRef>` | `@key`, `@name`, `@ref` | yes | no | |
| `<moduleRef>` | `@include`, `@except` | yes | **yes** | |
| `<moduleRef>` | `@key`, `@url` | yes | **yes** | |
| `<relation>` | `@active`, `@mutual` | yes | **yes** | [3] |
- **element** = The element specification that contains the choice=org. (There are none in other specifications.)
- **attributes** = The attributes that are in alternation with one another.
- **works?** = Whether or not the patterns in tei_all.rnc correctly enforce the alternation — i.e., whether or not the Stylesheets work correctly in this case.
- **+cS** = Is there an added `<constraintSpec>` that _also_ enforces this disjunction?
- **note** = Note anchors, if any.
## Major Problem (not this ticket)
In at least one case of `elementSpec//attList[@org eq 'choice']` it does not work. IIRC, it also never works in cases of overriding an attribute defined in a class. But those are problems for the Stylesheets repo, not this repo.
## Secondary Problem (this ticket)
I think we should consistently have an additional `<constraintSpec>` in all cases or (better yet) not have it in any cases.
## Tertiary Finding
(This ticket, but only because there were other problems; this one is a small, corrigible error I would have just fixed in dev without discussion. But since the discussion here might result in “just delete it” …)
The `<constraintSpec ident="not-except-and-include">` in `<moduleRef>` is misplaced. (It is a child of the "prefix" attribute definition; it should be the child of either the "except" or "include" attribute definition.)
## Notes
[1] @martindholmes actually has a (not unreasonable) use case for allowing both attributes, anyway.
[2] See also #2357.
[3] There is an interesting annotation in an XML comment in the `<constraintSpec>`: “this constraint is pointless in RELAX NG land, where the org=choice works. It is useful in DTD land, where the `attList/@org` has no effect. It looks to me like it is useful in W3C XML Schema land, too, which I find suprising, as I thought XSD could express a disjuntion like that w/o difficulty. (But I may be reading the XSD incorrectly.) — Syd, 2018-05-01”. | 1.0 | on constraint (of attribute alternation) duplication - ## Introduction
There are 6 uses of `<attList org="choice">` in the _Guidelines_:
_(See below for glossary of column headings.)_
| element | attributes | works? | +cS? | note |
|--------------- | ----------------------- | ------ | ------- | ----- |
| `<anyElement>` | `@include`, `@except` | **NO** | no | [1,2] |
| `<classRef>` | `@include`, `@except` | yes | no | |
| `<dataRef>` | `@key`, `@name`, `@ref` | yes | no | |
| `<moduleRef>` | `@include`, `@except` | yes | **yes** | |
| `<moduleRef>` | `@key`, `@url` | yes | **yes** | |
| `<relation>` | `@active`, `@mutual` | yes | **yes** | [3] |
- **element** = The element specification that contains the choice=org. (There are none in other specifications.)
- **attributes** = The attributes that are in alternation with one another.
- **works?** = Whether or not the patterns in tei_all.rnc correctly enforce the alternation — i.e., whether or not the Stylesheets work correctly in this case.
- **+cS** = Is there an added `<constraintSpec>` that _also_ enforces this disjunction?
- **note** = Note anchors, if any.
## Major Problem (not this ticket)
In at least one case of `elementSpec//attList[@org eq 'choice']` it does not work. IIRC, it also never works in cases of overriding an attribute defined in a class. But those are problems for the Stylesheets repo, not this repo.
## Secondary Problem (this ticket)
I think we should consistently have an additional `<constraintSpec>` in all cases or (better yet) not have it in any cases.
## Tertiary Finding
(This ticket, but only because there were other problems; this one is a small, corrigible error I would have just fixed in dev without discussion. But since the discussion here might result in “just delete it” …)
The `<constraintSpec ident="not-except-and-include">` in `<moduleRef>` is misplaced. (It is a child of the "prefix" attribute definition; it should be the child of either the "except" or "include" attribute definition.)
## Notes
[1] @martindholmes actually has a (not unreasonable) use case for allowing both attributes, anyway.
[2] See also #2357.
[3] There is an interesting annotation in an XML comment in the `<constraintSpec>`: “this constraint is pointless in RELAX NG land, where the org=choice works. It is useful in DTD land, where the `attList/@org` has no effect. It looks to me like it is useful in W3C XML Schema land, too, which I find suprising, as I thought XSD could express a disjuntion like that w/o difficulty. (But I may be reading the XSD incorrectly.) — Syd, 2018-05-01”. | non_test | on constraint of attribute alternation duplication introduction there are uses of in the guidelines see below for glossary of column headings element attributes works cs note include except no no include except yes no key name ref yes no include except yes yes key url yes yes active mutual yes yes element the element specification that contains the choice org there are none in other specifications attributes the attributes that are in alternation with one another works whether or not the patterns in tei all rnc correctly enforce the alternation — i e whether or not the stylesheets work correctly in this case cs is there an added that also enforces this disjunction note note anchors if any major problem not this ticket in at least one case of elementspec attlist it does not work iirc it also never works in cases of over riding an attribute defined in a class but those are problems for the stylesheets repo not this repo secondary problem this ticket i think we should consistently have an additional in all cases or better yet not have it in any cases tertiary finding this ticket but only because there were other problems this one is a small corrigible error i would have just fixed in dev without discussion but since the discussion here might result in “just delete it” … the in is misplaced it is a child of the prefix attribute definition it should be the child of either the except or include attribute definition notes martindholmes actually has a not unreasonable use case for allowing both attributes anyway see also 
there is an interesting annotation in an xml comment in the “this constraint is pointless in relax ng land where the org choice works it is useful in dtd land where the attlist org has no effect it looks to me like it is useful in xml schema land too which i find suprising as i thought xsd could express a disjuntion like that w o difficulty but i may be reading the xsd incorrectly — syd ” | 0 |
129,153 | 10,564,946,773 | IssuesEvent | 2019-10-05 06:59:04 | ValveSoftware/SteamVR-for-Linux | https://api.github.com/repos/ValveSoftware/SteamVR-for-Linux | closed | [BUG] Dashboard graphic bugs | Need Retest bug | **Describe the bug**
The dashboard is torn or distorted. This makes it effectively impossible to use the dashboard.
**To Reproduce**
Steps to reproduce the behavior:
1. Start SteamVR
2. Click on System Button
3. See error
**Expected behavior**
The dashboard should not be distorted
**System Information:**
- Distribution: Kubuntu 19.04
- SteamVR version: 1.8.6
- Steam client version: Sep 23 2019, at 16:07:56
- Opted into Steam client beta?: Yes
- Graphics driver version: Nvidia 435.21
- Gist for SteamVR System Information:
https://gist.github.com/fabiankranewitter/a5b1e9769b70ca7e84337a5fa8b48969
**Screenshots**

**Note:** Commenters who are also experiencing this issue are encouraged to include the "System Information" section in their replies.
| 1.0 | [BUG] Dashboard graphic bugs - **Describe the bug**
The dashboard is torn or distorted. This makes it effectively impossible to use the dashboard.
**To Reproduce**
Steps to reproduce the behavior:
1. Start SteamVR
2. Click on System Button
3. See error
**Expected behavior**
The dashboard should not be distorted
**System Information:**
- Distribution: Kubuntu 19.04
- SteamVR version: 1.8.6
- Steam client version: Sep 23 2019, at 16:07:56
- Opted into Steam client beta?: Yes
- Graphics driver version: Nvidia 435.21
- Gist for SteamVR System Information:
https://gist.github.com/fabiankranewitter/a5b1e9769b70ca7e84337a5fa8b48969
**Screenshots**

**Note:** Commenters who are also experiencing this issue are encouraged to include the "System Information" section in their replies.
| test | dashboard graphic bugs describe the bug the dashboard is torn or distorted this makes it really not possible to use the dashboard to reproduce steps to reproduce the behavior start steamvr click on system button see error expected behavior the dashboard should not be distorted system information distribution kubuntu steamvr version steam client version sep at opted into steam client beta yes graphics driver version nvidia gist for steamvr system information screenshots note commenters who are also experiencing this issue are encouraged to include the system information section in their replies | 1 |
668 | 2,577,889,013 | IssuesEvent | 2015-02-12 19:44:34 | jasonhall/google-styleguide | https://api.github.com/repos/jasonhall/google-styleguide | opened | When I try to import eclipse-java-google-style.xml formatter to Eclipse I've got a warning dialog | auto-migrated Priority-Medium Type-Defect | ```
1. Download
http://code.google.com/p/google-styleguide/source/browse/trunk/eclipse-java-google-style.xml
2. Download latest Eclipse IDE for Java Developers (I used
eclipse-java-luna-SR1-linux-gtk.tar.gz )
3. Extract downloaded archive and launch eclipse
4. Open Window->Preferences->Java->Code Style->Formatter
5. Import downloaded package
6. Eclipse shows a warning dialog that the importing profile has newer version
than current eclipse supported version. I attach the file with the dialog
screenshot.
Is it expected? How can I use this profile with full features turn on in
eclipse?
```
-----
Original issue reported on code.google.com by barinov.sergei@gmail.com on 28 Oct 2014 at 9:13
Attachments:
* [Screenshot from 2014-10-28 12:13:01.png](https://storage.googleapis.com/google-code-attachments/google-styleguide/issue-33/comment-0/Screenshot from 2014-10-28 12:13:01.png)
| 1.0 | When I try to import eclipse-java-google-style.xml formatter to Eclipse I've got a warning dialog - ```
1. Download
http://code.google.com/p/google-styleguide/source/browse/trunk/eclipse-java-google-style.xml
2. Download latest Eclipse IDE for Java Developers (I used
eclipse-java-luna-SR1-linux-gtk.tar.gz )
3. Extract downloaded archive and launch eclipse
4. Open Window->Preferences->Java->Code Style->Formatter
5. Import downloaded package
6. Eclipse shows a warning dialog that the importing profile has newer version
than current eclipse supported version. I attach the file with the dialog
screenshot.
Is it expected? How can I use this profile with full features turn on in
eclipse?
```
-----
Original issue reported on code.google.com by barinov.sergei@gmail.com on 28 Oct 2014 at 9:13
Attachments:
* [Screenshot from 2014-10-28 12:13:01.png](https://storage.googleapis.com/google-code-attachments/google-styleguide/issue-33/comment-0/Screenshot from 2014-10-28 12:13:01.png)
| non_test | when i try to import eclipse java google style xml formatter to eclipse i ve got a warning dialog download le style xml download latest eclipse ide for java developers i used eclipse java luna linux gtk tar gz extract downloaded archive and launch eclipse open window preferences java code style formatter import downloaded package eclipse shows a warning dialog that the importing profile has newer version than current eclipse supported version i attach the file with the dialog screenshot is it expected how can i use this profile with full features turn on in eclipse original issue reported on code google com by barinov sergei gmail com on oct at attachments from png | 0 |
45,218 | 5,703,952,535 | IssuesEvent | 2017-04-18 02:07:29 | geopython/pycsw | https://api.github.com/repos/geopython/pycsw | closed | Move to py.test as a testing framework | enhancement tests | We are currently using a fairly involved testing framework that has been custom tailored for pycsw. It features lots of functional tests for many suites. However, personally I find it hard to work with it because:
- It is hard to pinpoint errors when executing the `run_tests.py` script because of all of the output that it generates
- The previous state of a test run can influence the outcome of a new run
- The setup and cleanup tasks are defined in the `pavement.py` file, while the tests themselves are located elsewhere
- There is a lot of noise, with new files being generated and deleted in the project's directory
I propose moving to [pytest](http://pytest.org/latest/) as a more standard testing framework. It seems like a good choice because:
- It seems to be the de facto standard for python testing nowadays
- It is easy to install and works across Python versions
- Has lots of cool plugins and can generate reports that integrate with other tools
- It is great for writing all kinds of tests, from unit tests to integration to functional (which is what most of our tests are)
- It does a lot of stuff automatically, like hiding the output of tests that pass
I'm willing to work on the code and make this happen, if there is interest in it.
| 1.0 | Move to py.test as a testing framework - We are currently using a fairly involved testing framework that has been custom tailored for pycsw. It features lots of functional tests for many suites. However, personally I find it hard to work with it because:
- It is hard to pinpoint errors when executing the `run_tests.py` script because of all of the output that it generates
- The previous state of a test run can influence the outcome of a new run
- The setup and cleanup tasks are defined in the `pavement.py` file, while the tests themselves are located elsewhere
- There is a lot of noise, with new files being generated and deleted in the project's directory
I propose moving to [pytest](http://pytest.org/latest/) as a more standard testing framework. It seems like a good choice because:
- It seems to be the de facto standard for python testing nowadays
- It is easy to install and works across Python versions
- Has lots of cool plugins and can generate reports that integrate with other tools
- It is great for writing all kinds of tests, from unit tests to integration to functional (which is what most of our tests are)
- It does a lot of stuff automatically, like hiding the output of tests that pass
I'm willing to work on the code and make this happen, if there is interest in it.
| test | move to py test as a testing framework we are currently using a fairly involved testing framework that has been custom tailored for pycsw it features lots of functional tests for many suites however personally i find it hard to work with it because it is hard to pinpoint errors when executing the run tests py script because of all of the output that it generates the previous state of a test run can influence the outcome of a new run the setup and cleanup tasks are defined in the pavement py file while the tests themselves are located elsewhere there is a lot noise with new files being generated and deleted in the project s directory i propose moving to as a more standard testing framework it seems like a good choice because it seems to be the de facto standard for python testing nowadays it is easy to install and works across python versions has lots of cool plugins and can generate reports that integrate with other tools it is great for writing all kinds of tests from unit tests to integration to functional which is what most of our tests are it does a lot of stuff automatically like hiding the output of tests that pass i m willing to work on the code and make this happen if there is interest in it | 1 |
47,101 | 24,870,794,513 | IssuesEvent | 2022-10-27 15:05:12 | awslabs/soci-snapshotter | https://api.github.com/repos/awslabs/soci-snapshotter | opened | Check if bbolt db for metadata can be tuned | Performance Tuning | bbolt db, which stores file metadata, may be one of the reasons the snapshotter's workflows are slow. It would be good to have a look at what can be done to speed things up.
 | True | Check if bbolt db for metadata can be tuned - bbolt db, which stores file metadata, may be one of the reasons the snapshotter's workflows are slow. It would be good to have a look at what can be done to speed things up.
| non_test | check if bbolt db for metadata can be tuned bbolt db which stores file metadata information may be one of the reasons the snapshotter s workflows are slow it would be good to have a look at what can be done to speed things up | 0 |
127,258 | 18,010,363,217 | IssuesEvent | 2021-09-16 07:53:43 | maddyCode23/linux-4.1.15 | https://api.github.com/repos/maddyCode23/linux-4.1.15 | opened | CVE-2019-15923 (Medium) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2019-15923 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/paride/pcd.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/paride/pcd.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 5.0.9. There is a NULL pointer dereference for a cd data structure if alloc_disk fails in drivers/block/paride/pf.c.
<p>Publish Date: 2019-09-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15923>CVE-2019-15923</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.0.9">https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.0.9</a></p>
<p>Release Date: 2019-09-04</p>
<p>Fix Resolution: v5.1-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-15923 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2019-15923 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/paride/pcd.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/paride/pcd.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 5.0.9. There is a NULL pointer dereference for a cd data structure if alloc_disk fails in drivers/block/paride/pf.c.
<p>Publish Date: 2019-09-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15923>CVE-2019-15923</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.0.9">https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.0.9</a></p>
<p>Release Date: 2019-09-04</p>
<p>Fix Resolution: v5.1-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href vulnerable source files drivers block paride pcd c drivers block paride pcd c vulnerability details an issue was discovered in the linux kernel before there is a null pointer dereference for a cd data structure if alloc disk fails in drivers block paride pf c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
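The CVSS 3 block in the record above lists the individual base metrics next to the 5.5 score. As an illustrative cross-check (a sketch using the published CVSS v3.0 coefficient tables and roundup rule — not part of the original WhiteSource report), the listed metrics (AV:Local, AC:Low, PR:Low, UI:None, Scope:Unchanged, C:None, I:None, A:High) do reproduce 5.5:

```python
import math

# CVSS v3.0 coefficient tables for the metrics listed in the record above.
AV = {"Network": 0.85, "Adjacent": 0.62, "Local": 0.55, "Physical": 0.2}
AC = {"Low": 0.77, "High": 0.44}
PR = {"None": 0.85, "Low": 0.62, "High": 0.27}  # values for Scope: Unchanged
UI = {"None": 0.85, "Required": 0.62}
CIA = {"None": 0.0, "Low": 0.22, "High": 0.56}

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score for Scope: Unchanged."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # CVSS "roundup": smallest one-decimal number >= the value.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Metrics from the vulnerability record above.
score = base_score("Local", "Low", "Low", "None", "None", "None", "High")
```

With these inputs the function evaluates to 5.5, matching the score shown in the record.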
96,410 | 8,614,085,956 | IssuesEvent | 2018-11-19 16:34:30 | InFact-coop/your-sanctuary | https://api.github.com/repos/InFact-coop/your-sanctuary | closed | Feedback from user testing - disclaimer regarding website links and safety | enhancement please-test user-feedback | We need to put a disclaimer regarding providing website links somewhere at the start of the chat
Could we trial this to the left of the page
Please note - during the chat you may be sent links to external websites for more information and advice. For your safety, we recommend that you delete your web history once you have finished your chat. If you need advice on how to do this then please visit https://www.yoursanctuary.org.uk/cover-your-tracks-online. | 1.0 | Feedback from user testing - disclaimer regarding website links and safety - We need to put a disclaimer regarding providing website links somewhere at the start of the chat
Could we trial this to the left of the page
Please note - during the chat you may be sent links to external websites for more information and advice. For your safety, we recommend that you delete your web history once you have finished your chat. If you need advice on how to do this then please visit https://www.yoursanctuary.org.uk/cover-your-tracks-online. | test | feedback from user testing disclaimer regarding website links and safety we need to put a disclaimer regarding providing website links somewhere at the start of the chat could we trial this to the left of the page please note during the chat you may be sent links to external websites for more information and advice for your safety we recommend that you delete your web history once you have finished your chat if you need advice on how to do this then please visit | 1 |
117,139 | 9,914,184,210 | IssuesEvent | 2019-06-28 13:48:40 | ValveSoftware/steam-for-linux | https://api.github.com/repos/ValveSoftware/steam-for-linux | closed | Steam won't do anything at all | Need Retest reviewed | I use Ubuntu 16.04, the current version of Steam will not even complete installation. I can't play games, I can't even open the program. My system is up to date, Steam is up to date. It just won't work.
when I attempt to open Steam through Terminal, this is the result:
Repairing installation, linking /home/stephen/.steam/steam to /home/stephen/.local/share/Steam
rm: cannot remove '/home/stephen/.steam/steam': Is a directory
Setting up Steam content in /home/stephen/.local/share/Steam
rm: cannot remove '/home/stephen/.steam/steam': Is a directory
Steam support has been no help, here is the link to that conversation: [https://support.steampowered.com/view.php?ticketref=5774-TZCB-3123]
| 1.0 | Steam won't do anything at all - I use Ubuntu 16.04, the current version of Steam will not even complete installation. I can't play games, I can't even open the program. My system is up to date, steam is up date. it just won't work.
when I attempt to open Steam through Terminal, this is the result:
Repairing installation, linking /home/stephen/.steam/steam to /home/stephen/.local/share/Steam
rm: cannot remove '/home/stephen/.steam/steam': Is a directory
Setting up Steam content in /home/stephen/.local/share/Steam
rm: cannot remove '/home/stephen/.steam/steam': Is a directory
Steam support has been no help, here is the link to that conversation: [https://support.steampowered.com/view.php?ticketref=5774-TZCB-3123]
| test | steam won t do anything at all i use ubuntu the current version of steam will not even complete installation i can t play games i can t even open the program my system is up to date steam is up date it just won t work when i attempt to open steam through terminal this is the result repairing installation linking home stephen steam steam to home stephen local share steam rm cannot remove home stephen steam steam is a directory setting up steam content in home stephen local share steam rm cannot remove home stephen steam steam is a directory steam support has been no help here is the link to that conversation | 1 |
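The `rm: cannot remove '...': Is a directory` lines in the Steam record above are the ordinary POSIX behaviour of `rm` without `-r`: unlink-style removal refuses directories. A minimal Python sketch of the same failure mode (illustrative only — this is not the Steam bootstrap script itself, and the temp directory merely stands in for `~/.steam/steam`):

```python
import os
import shutil
import tempfile

# A directory standing in for /home/stephen/.steam/steam in the report above.
path = tempfile.mkdtemp(prefix="steam-demo-")

# Plain unlink (what `rm` without -r attempts) refuses a directory.
try:
    os.remove(path)
    removed_without_recursion = True
except OSError:  # IsADirectoryError on Linux, PermissionError on Windows
    removed_without_recursion = False

# Recursive removal (the `rm -r` equivalent) succeeds.
shutil.rmtree(path)
still_exists = os.path.exists(path)
```

This is why the repair script's `rm` call fails and has to fall back or abort when the target path is a directory rather than a symlink or file.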
148,938 | 11,872,270,447 | IssuesEvent | 2020-03-26 15:34:32 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: hibernate failed | C-test-failure O-roachtest O-robot branch-provisional_202003200044_v20.1.0-beta.3 release-blocker | [(roachtest).hibernate failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1819475&tab=buildLog) on [provisional_202003200044_v20.1.0-beta.3@62783b33de905d516f72146fcc4234a27dc8638e](https://github.com/cockroachdb/cockroach/commits/62783b33de905d516f72146fcc4234a27dc8638e):
```
The test failed on branch=provisional_202003200044_v20.1.0-beta.3, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20200320-1819475/hibernate/run_1
orm_helpers.go:214,orm_helpers.go:144,java_helpers.go:216,hibernate.go:173,hibernate.go:185,test_runner.go:753:
Tests run on Cockroach v20.1.0-beta.2-925-g62783b3
Tests run against hibernate HHH-13724-cockroachdb-dialects
7876 Total Tests Run
7856 tests passed
20 tests failed
1872 tests skipped
0 tests ignored
0 tests passed unexpectedly
20 tests failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- FAIL: org.hibernate.test.hql.HQLTest.testRowValueConstructorSyntaxInInListBeingTranslated - unknown (unexpected)
--- FAIL: org.hibernate.jpa.test.query.QueryTest.testNativeQueryNullNamedParameter - unknown (unexpected)
--- FAIL: org.hibernate.jpa.test.query.QueryTest.testNativeQueryNullPositionalParameterParameter - unknown (unexpected)
--- FAIL: org.hibernate.test.nationalized.UseNationalizedCharDataSettingTest.testSettingOnCharType - unknown (unexpected)
--- FAIL: org.hibernate.jpa.test.query.QueryTest.testNativeQueryNullPositionalParameter - unknown (unexpected)
--- FAIL: org.hibernate.test.hql.HQLTest.testExpressionWithParamInFunction - unknown (unexpected)
--- FAIL: org.hibernate.test.legacy.CustomSQLTest.testInsert - unknown (unexpected)
--- FAIL: org.hibernate.test.nationalized.UseNationalizedCharDataSettingTest.testSetting - unknown (unexpected)
--- FAIL: org.hibernate.test.hql.HQLTest.testGroupByFunction - unknown (unexpected)
--- FAIL: org.hibernate.cache.spi.ReadWriteCacheTest.testUpdate - unknown (unexpected)
--- FAIL: org.hibernate.jpa.test.query.QueryTest.testNativeQueryNullNamedParameterParameter - unknown (unexpected)
--- FAIL: org.hibernate.test.hql.ASTParserLoadingTest.testEJBQLFunctions - unknown (unexpected)
--- FAIL: org.hibernate.test.nationalized.SimpleNationalizedTest.simpleNationalizedTest - unknown (unexpected)
--- FAIL: org.hibernate.cache.spi.ReadWriteCacheTest.testDelete - unknown (unexpected)
--- FAIL: org.hibernate.test.converter.AndNationalizedTests.basicTest - unknown (unexpected)
--- FAIL: org.hibernate.test.legacy.FooBarTest.testCollectionsInSelect - unknown (unexpected)
--- FAIL: org.hibernate.test.schemaupdate.inheritance.tableperclass.SchemaCreationTest.testUniqueConstraintIsCorrectlyGenerated - unknown (unexpected)
--- FAIL: org.hibernate.test.annotations.query.QueryAndSQLTest.testQueryWithNullParameter - unknown (unexpected)
--- FAIL: org.hibernate.test.annotations.query.QueryAndSQLTest.testNativeQueryWithNullParameter - unknown (unexpected)
--- FAIL: org.hibernate.test.hql.HQLTest.testSelectStandardFunctionsNoParens - unknown (unexpected)
For a full summary look at the hibernate artifacts
An updated blacklist (hibernateBlackList20_1) is available in the artifacts' hibernate log
```
<details><summary>More</summary><p>
Artifacts: [/hibernate](https://teamcity.cockroachdb.com/viewLog.html?buildId=1819475&tab=artifacts#/hibernate)
Related:
- #46304 roachtest: hibernate failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202003181957_v20.1.0-beta.3](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202003181957_v20.1.0-beta.3) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #46175 roachtest: hibernate failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202003161814_v19.2.5](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202003161814_v19.2.5) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #45798 roachtest: hibernate failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #45795 roachtest: hibernate failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #45787 roachtest: hibernate failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ahibernate.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | roachtest: hibernate failed - [(roachtest).hibernate failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1819475&tab=buildLog) on [provisional_202003200044_v20.1.0-beta.3@62783b33de905d516f72146fcc4234a27dc8638e](https://github.com/cockroachdb/cockroach/commits/62783b33de905d516f72146fcc4234a27dc8638e):
```
The test failed on branch=provisional_202003200044_v20.1.0-beta.3, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20200320-1819475/hibernate/run_1
orm_helpers.go:214,orm_helpers.go:144,java_helpers.go:216,hibernate.go:173,hibernate.go:185,test_runner.go:753:
Tests run on Cockroach v20.1.0-beta.2-925-g62783b3
Tests run against hibernate HHH-13724-cockroachdb-dialects
7876 Total Tests Run
7856 tests passed
20 tests failed
1872 tests skipped
0 tests ignored
0 tests passed unexpectedly
20 tests failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- FAIL: org.hibernate.test.hql.HQLTest.testRowValueConstructorSyntaxInInListBeingTranslated - unknown (unexpected)
--- FAIL: org.hibernate.jpa.test.query.QueryTest.testNativeQueryNullNamedParameter - unknown (unexpected)
--- FAIL: org.hibernate.jpa.test.query.QueryTest.testNativeQueryNullPositionalParameterParameter - unknown (unexpected)
--- FAIL: org.hibernate.test.nationalized.UseNationalizedCharDataSettingTest.testSettingOnCharType - unknown (unexpected)
--- FAIL: org.hibernate.jpa.test.query.QueryTest.testNativeQueryNullPositionalParameter - unknown (unexpected)
--- FAIL: org.hibernate.test.hql.HQLTest.testExpressionWithParamInFunction - unknown (unexpected)
--- FAIL: org.hibernate.test.legacy.CustomSQLTest.testInsert - unknown (unexpected)
--- FAIL: org.hibernate.test.nationalized.UseNationalizedCharDataSettingTest.testSetting - unknown (unexpected)
--- FAIL: org.hibernate.test.hql.HQLTest.testGroupByFunction - unknown (unexpected)
--- FAIL: org.hibernate.cache.spi.ReadWriteCacheTest.testUpdate - unknown (unexpected)
--- FAIL: org.hibernate.jpa.test.query.QueryTest.testNativeQueryNullNamedParameterParameter - unknown (unexpected)
--- FAIL: org.hibernate.test.hql.ASTParserLoadingTest.testEJBQLFunctions - unknown (unexpected)
--- FAIL: org.hibernate.test.nationalized.SimpleNationalizedTest.simpleNationalizedTest - unknown (unexpected)
--- FAIL: org.hibernate.cache.spi.ReadWriteCacheTest.testDelete - unknown (unexpected)
--- FAIL: org.hibernate.test.converter.AndNationalizedTests.basicTest - unknown (unexpected)
--- FAIL: org.hibernate.test.legacy.FooBarTest.testCollectionsInSelect - unknown (unexpected)
--- FAIL: org.hibernate.test.schemaupdate.inheritance.tableperclass.SchemaCreationTest.testUniqueConstraintIsCorrectlyGenerated - unknown (unexpected)
--- FAIL: org.hibernate.test.annotations.query.QueryAndSQLTest.testQueryWithNullParameter - unknown (unexpected)
--- FAIL: org.hibernate.test.annotations.query.QueryAndSQLTest.testNativeQueryWithNullParameter - unknown (unexpected)
--- FAIL: org.hibernate.test.hql.HQLTest.testSelectStandardFunctionsNoParens - unknown (unexpected)
For a full summary look at the hibernate artifacts
An updated blacklist (hibernateBlackList20_1) is available in the artifacts' hibernate log
```
<details><summary>More</summary><p>
Artifacts: [/hibernate](https://teamcity.cockroachdb.com/viewLog.html?buildId=1819475&tab=artifacts#/hibernate)
Related:
- #46304 roachtest: hibernate failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202003181957_v20.1.0-beta.3](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202003181957_v20.1.0-beta.3) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #46175 roachtest: hibernate failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202003161814_v19.2.5](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202003161814_v19.2.5) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #45798 roachtest: hibernate failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #45795 roachtest: hibernate failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #45787 roachtest: hibernate failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ahibernate.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | roachtest hibernate failed on the test failed on branch provisional beta cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts hibernate run orm helpers go orm helpers go java helpers go hibernate go hibernate go test runner go tests run on cockroach beta tests run against hibernate hhh cockroachdb dialects total tests run tests passed tests failed tests skipped tests ignored tests passed unexpectedly tests failed unexpectedly tests expected failed but skipped tests expected failed but not run fail org hibernate test hql hqltest testrowvalueconstructorsyntaxininlistbeingtranslated unknown unexpected fail org hibernate jpa test query querytest testnativequerynullnamedparameter unknown unexpected fail org hibernate jpa test query querytest testnativequerynullpositionalparameterparameter unknown unexpected fail org hibernate test nationalized usenationalizedchardatasettingtest testsettingonchartype unknown unexpected fail org hibernate jpa test query querytest testnativequerynullpositionalparameter unknown unexpected fail org hibernate test hql hqltest testexpressionwithparaminfunction unknown unexpected fail org hibernate test legacy customsqltest testinsert unknown unexpected fail org hibernate test nationalized usenationalizedchardatasettingtest testsetting unknown unexpected fail org hibernate test hql hqltest testgroupbyfunction unknown unexpected fail org hibernate cache spi readwritecachetest testupdate unknown unexpected fail org hibernate jpa test query querytest testnativequerynullnamedparameterparameter unknown unexpected fail org hibernate test hql astparserloadingtest testejbqlfunctions unknown unexpected fail org hibernate test nationalized simplenationalizedtest simplenationalizedtest unknown unexpected fail org hibernate cache spi readwritecachetest testdelete unknown unexpected fail org hibernate test converter andnationalizedtests basictest unknown unexpected fail org hibernate test legacy foobartest testcollectionsinselect unknown unexpected fail org hibernate test schemaupdate inheritance tableperclass schemacreationtest testuniqueconstraintiscorrectlygenerated unknown unexpected fail org hibernate test annotations query queryandsqltest testquerywithnullparameter unknown unexpected fail org hibernate test annotations query queryandsqltest testnativequerywithnullparameter unknown unexpected fail org hibernate test hql hqltest testselectstandardfunctionsnoparens unknown unexpected for a full summary look at the hibernate artifacts an updated blacklist is available in the artifacts hibernate log more artifacts related roachtest hibernate failed roachtest hibernate failed roachtest hibernate failed roachtest hibernate failed roachtest hibernate failed powered by | 1
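The roachtest record above reports each failure as `--- FAIL: <fully.qualified.Class>.<method> - <reason>`. A small regex sketch for splitting such a line into its class, method, and reason parts (the line format is an assumption inferred only from the lines quoted in this record):

```python
import re

# Matches lines like:
# --- FAIL: org.hibernate.test.hql.HQLTest.testGroupByFunction - unknown (unexpected)
FAIL_RE = re.compile(r"^--- FAIL: (?P<cls>[\w.]+)\.(?P<method>\w+) - (?P<reason>.+)$")

def parse_fail(line):
    """Return (class, method, reason) for a '--- FAIL:' line, else None."""
    m = FAIL_RE.match(line.strip())
    if not m:
        return None
    return m.group("cls"), m.group("method"), m.group("reason")

sample = "--- FAIL: org.hibernate.test.hql.HQLTest.testGroupByFunction - unknown (unexpected)"
parsed = parse_fail(sample)
```

Lines that do not carry the `--- FAIL:` prefix (the pass/skip counters, for instance) simply return `None`.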
728,962 | 25,102,615,060 | IssuesEvent | 2022-11-08 14:36:53 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | stocktwits.com - design is broken | browser-firefox priority-normal engine-gecko | <!-- @browser: Firefox 106.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/113667 -->
**URL**: https://stocktwits.com/
**Browser / Version**: Firefox 106.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
after the update stocktwits would not load correctly. Everything is HUGE. Works fine on chrome. I deleted all history and cookies and nothing. Refreshed firefox and nothing. will try a full install but it is a pita.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/11/0317a978-095d-4852-aeb1-7357d63d4122.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | stocktwits.com - design is broken - <!-- @browser: Firefox 106.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/113667 -->
**URL**: https://stocktwits.com/
**Browser / Version**: Firefox 106.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
after the update stocktwits would not load correctly. Everything is HUGE. Works fine on chrome. I deleted all history and cookies and nothing. Refreshed firefox and nothing. will try a full install but it is a pita.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/11/0317a978-095d-4852-aeb1-7357d63d4122.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | stocktwits com design is broken url browser version firefox operating system windows tested another browser yes chrome problem type design is broken description items not fully visible steps to reproduce after the update stocktwits would not load correctly everything is huge works fine on chrome i deleted all history and cookies and nothing refreshed firefox and nothing will try a full install but it is a pita view the screenshot img alt screenshot src browser configuration none from with ❤️ | 0 |
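The webcompat report above embeds its machine-readable metadata in HTML comments of the form `<!-- @key: value -->`. A small sketch (the comment format is inferred solely from the comments quoted in this record) that collects them into a dict:

```python
import re

# Matches metadata comments such as: <!-- @browser: Firefox 106.0 -->
META_RE = re.compile(r"<!--\s*@(\w+):\s*(.*?)\s*-->")

def parse_meta(report: str) -> dict:
    """Collect <!-- @key: value --> comments from a report body into a dict."""
    return {key: value for key, value in META_RE.findall(report)}

report = """\
<!-- @browser: Firefox 106.0 -->
<!-- @reported_with: unknown -->
**URL**: https://stocktwits.com/
"""
meta = parse_meta(report)
```

Only the comment lines contribute entries; ordinary Markdown lines such as the URL field are ignored.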
322,498 | 23,910,147,149 | IssuesEvent | 2022-09-09 07:22:25 | hapijs/inert | https://api.github.com/repos/hapijs/inert | closed | Release Notes 7.0.0 | documentation | <!--
⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️
You must complete this entire issue template to receive support. You MUST NOT remove, change, or replace the template with your own format. A missing or incomplete report will cause your issue to be closed without comment. Please respect the time and experience that went into this template. It is here for a reason. Thank you!
⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️
-->
#### Context
* *node version*: 18.3.0
* *module version*: 7.0.0
#### What are you trying to achieve or the steps to reproduce ?
<!--
Before opening a documentation issue, please consider opening a Pull Request instead for trivial changes such as typos, spelling, incorrect links, anchors, or other corrections that are easier to just fix than report using this template.
Please do not spend valuable time proposing extensive changes to the documentation before first asking about it. We value your time and do not want to waste it. Just open an issue first using this template and ask if your proposed changes would be helpful.
Make sure to wrap all code examples in backticks so that they display correctly. Before submitting an issue, make sure to click on the Preview tab above to verify everything is formatted correctly.
-->
Version 7.0.0 has been released in npm, but no corresponding release with a change log is available here.
| 1.0 | Release Notes 7.0.0 - <!--
⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️
You must complete this entire issue template to receive support. You MUST NOT remove, change, or replace the template with your own format. A missing or incomplete report will cause your issue to be closed without comment. Please respect the time and experience that went into this template. It is here for a reason. Thank you!
⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️
-->
#### Context
* *node version*: 18.3.0
* *module version*: 7.0.0
#### What are you trying to achieve or the steps to reproduce ?
<!--
Before opening a documentation issue, please consider opening a Pull Request instead for trivial changes such as typos, spelling, incorrect links, anchors, or other corrections that are easier to just fix than report using this template.
Please do not spend valuable time proposing extensive changes to the documentation before first asking about it. We value your time and do not want to waste it. Just open an issue first using this template and ask if your proposed changes would be helpful.
Make sure to wrap all code examples in backticks so that they display correctly. Before submitting an issue, make sure to click on the Preview tab above to verify everything is formatted correctly.
-->
Version 7.0.0 has been released in npm, but no corresponding release with a change log is available here.
| non_test | release notes ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ you must complete this entire issue template to receive support you must not remove change or replace the template with your own format a missing or incomplete report will cause your issue to be closed without comment please respect the time and experience that went into this template it is here for a reason thank you ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ ⚠️ context node version module version what are you trying to achieve or the steps to reproduce before opening a documentation issue please consider opening a pull request instead for trivial changes such as typos spelling incorrect links anchors or other corrections that are easier to just fix than report using this template please do not spend valuable time proposing extensive changes to the documentation before first asking about it we value your time and do not want to waste it just open an issue first using this template and ask if your proposed changes would be helpful make sure to wrap all code examples in backticks so that they display correctly before submitting an issue make sure to click on the preview tab above to verify everything is formatted correctly version has been released in npm but no corresponding release with a change log is available here | 0 |
202,810 | 15,301,830,944 | IssuesEvent | 2021-02-24 14:04:50 | WoWManiaUK/Blackwing-Lair | https://api.github.com/repos/WoWManiaUK/Blackwing-Lair | closed | [Spell] "Nothing to dispel" | Class/Spells Confirmed By Tester Fixed Confirmed Fixed in Dev | **Links:**
https://www.wowhead.com/news=151398/changes-to-dispels-in-cataclysm
**What is happening:**
You cannot cast dispelling spells on targets with no valid debuffs.
**What should happen:**
>In Cataclysm we are raising the mana costs, making it possible to waste mana by casting a dispel when there is nothing to dispel
You should be able to waste dispels by using them on non-afflicted targets.
Affected Spells:
Paladin: 4987 and 53551
Druid: 2782 and 88423
Shaman: 51886 and 77130
Priest: 527, 518 and possibly 32375 and 33167 (I don't know much about priests)
I have only tested paladin and druid, others I just assume work the same way.
| 1.0 | [Spell] "Nothing to dispel" - **Links:**
https://www.wowhead.com/news=151398/changes-to-dispels-in-cataclysm
**What is happening:**
You cannot cast dispelling spells on targets with no valid debuffs.
**What should happen:**
>In Cataclysm we are raising the mana costs, making it possible to waste mana by casting a dispel when there is nothing to dispel
You should be able to waste dispels by using them on non-afflicted targets.
Affected Spells:
Paladin: 4987 and 53551
Druid: 2782 and 88423
Shaman: 51886 and 77130
Priest: 527, 518 and possibly 32375 and 33167 (I don't know much about priests)
I have only tested paladin and druid, others I just assume work the same way.
| test | nothing to dispel links what is happening you cannot cast dispelling spells on targets with no valid debuffs what should happen in cataclysm we are raising the mana costs making it possible to waste mana by casting a dispel when there is nothing to dispel you should be able to waste dispels by using them on non afflicted targets affected spells paladin and druid and shaman and priest and possibly and i don t know much about priests i have only tested paladin and druid others i just assume work the same way | 1 |
228,373 | 18,172,650,115 | IssuesEvent | 2021-09-27 21:55:10 | istio/istio | https://api.github.com/repos/istio/istio | closed | Many pilot integration tests fail setup on remote clusters in a multicluster topology | area/test and release feature/Multi-cluster | ### Bug Description
Many of the pilot integration tests install Istio config resources such as Gateway, VirtualService, etc., on every cluster in the topology. In a remote cluster the Istio CRDs are not available which causes the setup to fail with errors similar to the following:
```
2021-09-07T19:47:34.127490Z info tf === BEGIN: Test: 'pilot[TestMirroring/mirror-percent-absent]' ===
config.go:85: failed applying YAML to cluster remote: unable to recognize "/tmp/pilot-439f9f0ce44143008a0434add/TestMirroring/mirror-percent-absent/_test_context/VirtualService.2877169153.yaml": no matches for kind "VirtualService" in version "networking.istio.io/v1alpha3":
```
The current multicluster tests don't have this problem because they are not using "true" (istiodless) remote clusters, i.e., the remote clusters have istiod and the CRDs installed.
The failing tests (or framework) need to be changed to only install the Istio resources on config clusters.
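The `no matches for kind ... in version ...` message quoted above is how the Kubernetes client reports an unregistered resource type, i.e. a missing CRD. A regex sketch for pulling the missing kind and API version out of such an error message (the pattern is inferred from the single log line quoted in this record, not from any Kubernetes source):

```python
import re

NO_MATCH_RE = re.compile(
    r'no matches for kind "(?P<kind>[^"]+)" in version "(?P<version>[^"]+)"'
)

def missing_crd(message):
    """Return (kind, version) if the error is a missing-CRD 'no matches' error."""
    m = NO_MATCH_RE.search(message)
    return (m.group("kind"), m.group("version")) if m else None

err = ('unable to recognize "VirtualService.yaml": no matches for kind '
      '"VirtualService" in version "networking.istio.io/v1alpha3"')
result = missing_crd(err)
```

A check like this could let a test harness distinguish "CRDs absent on a remote cluster" setup failures from genuine apply errors.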
A command similar to the following can be used to reproduce the failures in a local test environment:
```
sudo make shell
export HUB=gcr.io/istio-testing
export TAG=1.12-alpha.ce6981cec04932b4f4dc38499efcb708b22eb013
rm -rf artifacts
ARTIFACTS=$PWD/artifacts ./prow/integ-suite-kind.sh --topology MULTICLUSTER --skip-build --skip-cleanup --topology-config prow/config/topology/external-istiod-multicluster.json
go test -p 1 -vet=off -v -count=1 -tags=integ ./tests/integration/pilot/... -timeout 30m --istio.test.ci --istio.test.pullpolicy=IfNotPresent --istio.test.kube.topology=/work/localtest.external-istiod-multicluster.json --istio.test.skipVM >/tmp/log.txt 2>&1
```
### Version
```prose
Current version.
```
### Additional Information
_No response_ | 1.0 | test | 1 |
22,429 | 2,649,105,104 | IssuesEvent | 2015-03-14 16:07:05 | Paradoxianer/ProjectConceptor_base | https://api.github.com/repos/Paradoxianer/ProjectConceptor_base | opened | Redesign Codebase - Plugins return Toolbar + Toolbarposition | auto-migrated Maintainability Priority-Low | _From @GoogleCodeExporter on March 14, 2015 10:34_
```
Plugins return ToolBarposition + Toolbars
There will be a Menu Entry to enable and disable the Toolbars,
and they will be removed or added on activation or deactivation of the editor
```
Original issue reported on code.google.com by `two4...@gmail.com` on 28 Dec 2013 at 6:07
* Blocking: #5
_Copied from original issue: Paradoxianer/projectconceptor#7_ | 1.0 | non_test | 0 |
302,674 | 26,159,026,814 | IssuesEvent | 2022-12-31 07:30:53 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: cdc/cloud-sink-gcs/rangefeed=true failed | C-test-failure O-robot O-roachtest release-blocker branch-release-22.2 | roachtest.cdc/cloud-sink-gcs/rangefeed=true [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8147282?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8147282?buildTab=artifacts#/cdc/cloud-sink-gcs/rangefeed=true) on release-22.2 @ [07a53a36601e9ca5fcffcff55f69b43c6dfbf1c1](https://github.com/cockroachdb/cockroach/commits/07a53a36601e9ca5fcffcff55f69b43c6dfbf1c1):
```
test artifacts and logs in: /artifacts/cdc/cloud-sink-gcs/rangefeed=true/run_1
(test_impl.go:286).Fatal: output in run_073000.729710524_n4_workload_run_tpcc: ./workload run tpcc --warehouses=50 --duration=30m {pgurl:1-3} returned: context canceled
(test_impl.go:286).Fatal: monitor failure: monitor task failed: pq: Use of CHANGEFEED requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=16</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #91650 roachtest: cdc/cloud-sink-gcs/rangefeed=true failed [C-test-failure O-roachtest O-robot T-cdc branch-release-22.1]
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cdc/cloud-sink-gcs/rangefeed=true.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | test | 1 |
112,649 | 17,095,343,135 | IssuesEvent | 2021-07-09 01:03:49 | samq-wsdemo/cargo-audit | https://api.github.com/repos/samq-wsdemo/cargo-audit | closed | WS-2018-0636 (High) detected in smallvec-0.6.14.crate - autoclosed | security vulnerability | ## WS-2018-0636 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>smallvec-0.6.14.crate</b></p></summary>
<p>'Small vector' optimization: store up to a small number of items on the stack</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/smallvec/0.6.14/download">https://crates.io/api/v1/crates/smallvec/0.6.14/download</a></p>
<p>
Dependency Hierarchy:
- abscissa_core-0.5.2.crate (Root Library)
- tracing-subscriber-0.1.6.crate
- :x: **smallvec-0.6.14.crate** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samq-wsdemo/cargo-audit/commit/9c5e38d023f1ba549a2efd6943482ff505eb7f00">9c5e38d023f1ba549a2efd6943482ff505eb7f00</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected versions of this crate called mem::uninitialized() to create values of a user-supplied type T. This is unsound e.g. if T is a reference type (which must be non-null and thus may not remain uninitialized).
The flaw was corrected by avoiding the use of mem::uninitialized(), using MaybeUninit instead.
<p>Publish Date: 2018-09-25
<p>URL: <a href=https://github.com/servo/rust-smallvec/commit/690d65e587409bf88fa942dea99e2aeb4c6b23e4>WS-2018-0636</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/servo/rust-smallvec/releases/tag/v1.0.0">https://github.com/servo/rust-smallvec/releases/tag/v1.0.0</a></p>
<p>Release Date: 2018-09-25</p>
<p>Fix Resolution: v1.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Crate","packageName":"smallvec","packageVersion":"0.6.14","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"abscissa_core:0.5.2;tracing-subscriber:0.1.6;smallvec:0.6.14","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v1.0.0"}],"baseBranches":["main"],"vulnerabilityIdentifier":"WS-2018-0636","vulnerabilityDetails":"Affected versions of this crate called mem::uninitialized() to create values of a user-supplied type T. This is unsound e.g. if T is a reference type (which must be non-null and thus may not remain uninitialized).\n\nThe flaw was corrected by avoiding the use of mem::uninitialized(), using MaybeUninit instead.","vulnerabilityUrl":"https://github.com/servo/rust-smallvec/commit/690d65e587409bf88fa942dea99e2aeb4c6b23e4","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | non_test | 0 |
129,078 | 27,389,394,708 | IssuesEvent | 2023-02-28 15:20:39 | WoWManiaUK/Redemption | https://api.github.com/repos/WoWManiaUK/Redemption | closed | [NPC] Brazie Getz - Missing | Fixed on PTR - Tester Confirmed Code Change | **Links:**
https://wowpedia.fandom.com/wiki/Brazie_Getz
https://www.wow-mania.com/armory/?npc=37904
**What is Happening:**
This NPC is not spawning after the Alliance kills Deathbringer Saurfang.
**What Should happen:**
Brazie Getz is a [gnome](https://wowpedia.fandom.com/wiki/Gnome) [general goods](https://wowpedia.fandom.com/wiki/General_goods) vendor found at [Deathbringer's Rise](https://wowpedia.fandom.com/wiki/Deathbringer%27s_Rise) in [Icecrown Citadel](https://wowpedia.fandom.com/wiki/Icecrown_Citadel_(instance)) after the Alliance defeats [Deathbringer Saurfang](https://wowpedia.fandom.com/wiki/Deathbringer_Saurfang).
| 1.0 | non_test | 0 |
24,256 | 4,073,968,249 | IssuesEvent | 2016-05-28 04:06:05 | lavalamp/issue-testing | https://api.github.com/repos/lavalamp/issue-testing | opened | Broken test run: kubernetes-e2e-gke - 7900 [6 failures] | kind/flake team/test-infra | https://storage.googleapis.com/kubernetes-jenkins/logs/kubernetes-e2e-gke/7900/
Multiple broken tests:
Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:122
Expected error:
<*errors.StatusError | 0xc8208ba100>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server does not allow access to the requested resource (get serviceAccounts)",
Reason: "Forbidden",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-job-ew4tk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 403,
},
}
the server does not allow access to the requested resource (get serviceAccounts)
not to have occurred
```
Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:226
May 27 20:15:05.010: failed to get pod pod2, that's pretty weird. validation failed: the server does not allow access to the requested resource (get pods pod2)
```
Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:406
May 27 20:15:24.788: expected node port 31822 to be in use, stdout: . err: Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.206.103 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-l6wke hostexec -- /bin/sh -c for i in $(seq 1 300); do if ss -ant46 'sport = :31822' | grep ^LISTEN; then exit 0; fi; sleep 1; done; exit 1] [] <nil> Error from server:
[] <nil> 0xc820857960 exit status 1 <nil> true [0xc820ce8070 0xc820ce8088 0xc820ce80a0] [0xc820ce8070 0xc820ce8088 0xc820ce80a0] [0xc820ce8080 0xc820ce8098] [0x9e43e0 0x9e43e0] 0xc820d0f260}:
Command stdout:
stderr:
Error from server:
error:
exit status 1
```
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:857
Expected error:
<*errors.errorString | 0xc820b94660>: {
s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.206.103 --kubeconfig=/workspace/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hsj6h] [] <nil> Error from server: the server does not allow access to the requested resource (delete replicationControllers e2e-test-nginx-rc)\n [] <nil> 0xc820889380 exit status 1 <nil> true [0xc82004e200 0xc82004e220 0xc82004e238] [0xc82004e200 0xc82004e220 0xc82004e238] [0xc82004e218 0xc82004e230] [0x9e43e0 0x9e43e0] 0xc820747aa0}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource (delete replicationControllers e2e-test-nginx-rc)\n\nerror:\nexit status 1\n",
}
Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://130.211.206.103 --kubeconfig=/workspace/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hsj6h] [] <nil> Error from server: the server does not allow access to the requested resource (delete replicationControllers e2e-test-nginx-rc)
[] <nil> 0xc820889380 exit status 1 <nil> true [0xc82004e200 0xc82004e220 0xc82004e238] [0xc82004e200 0xc82004e220 0xc82004e238] [0xc82004e218 0xc82004e230] [0x9e43e0 0x9e43e0] 0xc820747aa0}:
Command stdout:
stderr:
Error from server: the server does not allow access to the requested resource (delete replicationControllers e2e-test-nginx-rc)
error:
exit status 1
not to have occurred
```
Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:123
May 27 20:15:13.260: Couldn't delete ns "e2e-tests-pods-0uim5": the server does not allow access to the requested resource (delete namespaces e2e-tests-pods-0uim5)
```
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:38
Expected error:
<*errors.StatusError | 0xc820870500>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server does not allow access to the requested resource (get pods)",
Reason: "Forbidden",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Forbidden: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-dtcbd/pods?fieldSelector=metadata.name%3Dmy-hostname-basic-50bde913-2482-11e6-b103-0242ac110005-25vj3\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 403,
},
}
the server does not allow access to the requested resource (get pods)
not to have occurred
```
| 1.0 | test | 1 |
access to the requested resource delete replicationcontrollers test nginx rc exit status true command stdout stderr error from server the server does not allow access to the requested resource delete replicationcontrollers test nginx rc error exit status not to have occurred failed pods should get a host ip kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go may couldn t delete ns tests pods the server does not allow access to the requested resource delete namespaces tests pods failed replicationcontroller should serve a basic image on each replica with a public image kubernetes suite go src io kubernetes output dockerized go src io kubernetes test rc go expected error errstatus typemeta kind apiversion listmeta selflink resourceversion status failure message the server does not allow access to the requested resource get pods reason forbidden details name group kind pods causes type unexpectedserverresponse message forbidden api watch namespaces tests replication controller dtcbd pods fieldselector metadata name hostname basic field retryafterseconds code the server does not allow access to the requested resource get pods not to have occurred | 1 |
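Every failure quoted in the row above is the same RBAC symptom: the apiserver answering `Forbidden` for some verb/resource pair. A minimal, hypothetical triage helper (not part of the Kubernetes e2e framework; the name and regex are assumptions for illustration) can pull those pairs out of a log blob:

```python
import re

# Matches the denial lines quoted above, e.g.
#   "... does not allow access to the requested resource (delete replicationControllers ...)"
FORBIDDEN_RE = re.compile(
    r"server does not allow access to the requested resource \((\w+) ([\w.-]+)"
)

def rbac_denials(log: str):
    """Return (verb, resource) pairs for each RBAC denial found in a log blob."""
    return [m.groups() for m in FORBIDDEN_RE.finditer(log)]
```

Run over the stderr above, this surfaces pairs like `('delete', 'replicationControllers')` and `('get', 'pods')` without re-reading the whole dump.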
290,234 | 25,044,996,297 | IssuesEvent | 2022-11-05 05:39:00 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: sqlsmith/setup=empty/setting=no-mutations failed | C-test-failure O-robot O-roachtest branch-master | roachtest.sqlsmith/setup=empty/setting=no-mutations [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7330110?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7330110?buildTab=artifacts#/sqlsmith/setup=empty/setting=no-mutations) on master @ [2c0129ce2507e708382cda830d1e47817a299c2e](https://github.com/cockroachdb/cockroach/commits/2c0129ce2507e708382cda830d1e47817a299c2e):
```
test artifacts and logs in: /artifacts/sqlsmith/setup=empty/setting=no-mutations/run_1
(test_impl.go:297).Fatalf: error: pq: internal error: runtime error: invalid memory address or nil pointer dereference
stmt:
SELECT
concat_agg(tab_115482.col_199004::BYTES)::BYTES AS col_199005
FROM
(
VALUES
(COALESCE('\xea31001d1cd9':::BYTES, NULL)),
(crdb_internal.descriptor_with_post_deserialization_changes('\x':::BYTES::BYTES)::BYTES),
('\x2b':::BYTES)
)
AS tab_115482 (col_199004)
WHERE
true
GROUP BY
tab_115482.col_199004
HAVING
every(true::BOOL)::BOOL;
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=empty/setting=no-mutations.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: sqlsmith/setup=empty/setting=no-mutations failed - roachtest.sqlsmith/setup=empty/setting=no-mutations [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7330110?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7330110?buildTab=artifacts#/sqlsmith/setup=empty/setting=no-mutations) on master @ [2c0129ce2507e708382cda830d1e47817a299c2e](https://github.com/cockroachdb/cockroach/commits/2c0129ce2507e708382cda830d1e47817a299c2e):
```
test artifacts and logs in: /artifacts/sqlsmith/setup=empty/setting=no-mutations/run_1
(test_impl.go:297).Fatalf: error: pq: internal error: runtime error: invalid memory address or nil pointer dereference
stmt:
SELECT
concat_agg(tab_115482.col_199004::BYTES)::BYTES AS col_199005
FROM
(
VALUES
(COALESCE('\xea31001d1cd9':::BYTES, NULL)),
(crdb_internal.descriptor_with_post_deserialization_changes('\x':::BYTES::BYTES)::BYTES),
('\x2b':::BYTES)
)
AS tab_115482 (col_199004)
WHERE
true
GROUP BY
tab_115482.col_199004
HAVING
every(true::BOOL)::BOOL;
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=empty/setting=no-mutations.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | roachtest sqlsmith setup empty setting no mutations failed roachtest sqlsmith setup empty setting no mutations with on master test artifacts and logs in artifacts sqlsmith setup empty setting no mutations run test impl go fatalf error pq internal error runtime error invalid memory address or nil pointer dereference stmt select concat agg tab col bytes bytes as col from values coalesce bytes null crdb internal descriptor with post deserialization changes x bytes bytes bytes bytes as tab col where true group by tab col having every true bool bool parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see cc cockroachdb sql queries | 1 |
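The roachtest row above shows the sqlsmith outcome worth filing: a generated statement that trips an *internal* error (a nil pointer dereference) rather than an ordinary SQL error. A toy triage rule in that spirit — illustrative only, not CockroachDB's actual logic:

```python
def triage(error_message: str) -> str:
    """Classify a failed statement's error text, sqlsmith-style."""
    msg = error_message.lower()
    if "internal error" in msg or "nil pointer dereference" in msg:
        return "file-bug"            # crashes like the pq error quoted above
    if msg:
        return "expected-sql-error"  # fuzzing noise: bad casts, overflows, ...
    return "ok"
```

The point of the split is that a random-query fuzzer produces ordinary SQL errors constantly; only the "file-bug" bucket becomes an issue like the one above.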
335,231 | 30,019,174,553 | IssuesEvent | 2023-06-26 21:22:36 | ISPC-TST-ARQUITECTURA-Y-CONECTIVIDAD/tareafinal-grupo-5 | https://api.github.com/repos/ISPC-TST-ARQUITECTURA-Y-CONECTIVIDAD/tareafinal-grupo-5 | closed | [DEV-05.4] Run exhaustive tests on the code and fix identified errors | enhancement Software Testing | Run rigorous tests to make sure the code works correctly, and fix any error or problem identified during testing. | 1.0 | [DEV-05.4] Run exhaustive tests on the code and fix identified errors - Run rigorous tests to make sure the code works correctly, and fix any error or problem identified during testing. | test | run exhaustive tests on the code and fix identified errors run rigorous tests to make sure the code works correctly and fix any error or problem identified during testing | 1
29,011 | 13,922,691,044 | IssuesEvent | 2020-10-21 13:34:46 | AMICI-dev/AMICI | https://api.github.com/repos/AMICI-dev/AMICI | closed | Optimize code generation for sx0_fixed parameters | c++ performance python | For larger models, these source files can become quite big and keep the compiler busy for quite some time. Therefore, we should generate more compiler-friendly code. | True | Optimize code generation for sx0_fixed parameters - For larger models, these source files can become quite big and keep the compiler busy for quite some time. Therefore, we should generate more compiler-friendly code. | non_test | optimize code generation for fixed parameters for larger models these source files can become quite big and keep the compiler busy for quite some time therefore we should generate more compiler friendly code | 0 |
102,078 | 8,817,101,043 | IssuesEvent | 2018-12-30 19:18:32 | qaviton/test_repository | https://api.github.com/repos/qaviton/test_repository | reopened | functional web login with invalid password characters test | test | create test with steps :
1) load a url
2) go to login
3) enter username and password with invalid characters
4) click login
5) assert login error | 1.0 | functional web login with invalid password characters test - create test with steps :
1) load a url
2) go to login
3) enter username and password with invalid characters
4) click login
5) assert login error | test | functional web login with invalid password characters test create test with steps load a url go to login enter username and password with invalid characters click login assert login error | 1 |
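The five manual steps in the row above translate almost one-for-one into an automated check. `LoginPage` below is a stand-in stub so the sketch is self-contained; in a real suite it would wrap a browser driver, and every name here is an assumption:

```python
class LoginPage:
    """Stub page object standing in for a real browser-driven login page."""
    VALID = {"alice": "s3cret"}

    def __init__(self, url):                  # 1) load a url
        self.url, self.error = url, None

    def open_login(self):                     # 2) go to login
        return self

    def submit(self, user, password):         # 3) enter credentials, 4) click login
        if not password.isalnum() or self.VALID.get(user) != password:
            self.error = "login error"
        return self

def test_login_with_invalid_characters():
    page = LoginPage("https://example.test").open_login()
    page.submit("alice", "p@$$w0rd!")         # password with invalid characters
    assert page.error == "login error"        # 5) assert login error
```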
443,057 | 30,872,277,898 | IssuesEvent | 2023-08-03 12:12:24 | ccameron/REPIC | https://api.github.com/repos/ccameron/REPIC | closed | Error installing REPIC | documentation good first issue | Hi Chris!
I am reviewing the scipion plugin for REPIC.
We were using a git clone.... and I'm trying literally what is in the documentation.... Pasted is what scipion does which is mainly this:
`eval "$(/extra/miniconda3/bin/conda shell.bash hook)"&& conda create -n repic-0 -c bioconda -c gurobi repic gurobi`
I've tried forcing python 3.8 but fails in the same way. Any idea what is going on?
````
eval "$(/extra/miniconda3/bin/conda shell.bash hook)"&& conda create -n repic-0 -c bioconda -c gurobi repic gurobi && touch repic_0_installed
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: -
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package python conflicts for:
repic -> matplotlib-base[version='>=3.2.2'] -> python[version='3.6.*|>=2.7,<2.8.0a0|>=3.10,<3.11.0a0|>=3.11,<3.12.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0|>=3.7,<3.8.0a0|>=3.6,<3.7.0a0|>=3.7.1,<3.8.0a0|>=3.5,<3.6.0a0|>=2.7']
repic -> python
Package zlib conflicts for:
gurobi -> python[version='>=3.11,<3.12.0a0'] -> zlib[version='>=1.2.11,<1.3.0a0|>=1.2.12,<1.3.0a0|>=1.2.13,<1.3.0a0']
repic -> python -> zlib[version='>=1.2.11,<1.3.0a0|>=1.2.12,<1.3.0a0|>=1.2.13,<1.3.0a0']
````
| 1.0 | Error installing REPIC - Hi Chris!
I am reviewing the scipion plugin for REPIC.
We were using a git clone.... and I'm trying literally what is in the documentation.... Pasted is what scipion does which is mainly this:
`eval "$(/extra/miniconda3/bin/conda shell.bash hook)"&& conda create -n repic-0 -c bioconda -c gurobi repic gurobi`
I've tried forcing python 3.8 but fails in the same way. Any idea what is going on?
````
eval "$(/extra/miniconda3/bin/conda shell.bash hook)"&& conda create -n repic-0 -c bioconda -c gurobi repic gurobi && touch repic_0_installed
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: -
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package python conflicts for:
repic -> matplotlib-base[version='>=3.2.2'] -> python[version='3.6.*|>=2.7,<2.8.0a0|>=3.10,<3.11.0a0|>=3.11,<3.12.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0|>=3.7,<3.8.0a0|>=3.6,<3.7.0a0|>=3.7.1,<3.8.0a0|>=3.5,<3.6.0a0|>=2.7']
repic -> python
Package zlib conflicts for:
gurobi -> python[version='>=3.11,<3.12.0a0'] -> zlib[version='>=1.2.11,<1.3.0a0|>=1.2.12,<1.3.0a0|>=1.2.13,<1.3.0a0']
repic -> python -> zlib[version='>=1.2.11,<1.3.0a0|>=1.2.12,<1.3.0a0|>=1.2.13,<1.3.0a0']
````
| non_test | error installing repic hi chris i am reviewing the scipion plugin for repic we were using a git clone and i m trying literary what is in the documentation pasted is what scipion does which is mainly this eval extra bin conda shell bash hook conda create n repic c bioconda c gurobi repic gurobi i ve tried forcing python but fails in the same way any idea what is going on eval extra bin conda shell bash hook conda create n repic c bioconda c gurobi repic gurobi touch repic installed collecting package metadata current repodata json done solving environment failed with repodata from current repodata json will retry with next repodata source collecting package metadata repodata json done solving environment found conflicts looking for incompatible packages this can take several minutes press ctrl c to abort failed unsatisfiableerror the following specifications were found to be incompatible with each other output in format requested package available versions package python conflicts for repic matplotlib base python repic python package zlib conflicts for gurobi python zlib repic python zlib | 0 |
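Conda's `UnsatisfiableError` report above names each offender in a `Package <name> conflicts for:` header, so the quick summary of the clash can be extracted mechanically. A small illustrative parser — not part of conda:

```python
import re

def conflicting_packages(report: str):
    """List the package names conda flagged in an UnsatisfiableError report."""
    return re.findall(r"^Package (\S+) conflicts for:", report, flags=re.M)
```

Against the solver output above it yields `['python', 'zlib']`, which matches what the log shows directly: `repic` and `gurobi` carry incompatible Python/zlib version ranges.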
41,012 | 5,329,661,996 | IssuesEvent | 2017-02-15 15:23:11 | GoogleCloudPlatform/google-cloud-node | https://api.github.com/repos/GoogleCloudPlatform/google-cloud-node | opened | bigquery: complete system tests | bigquery testing | Here's what we need to hit.
| Data type | Queries (Legacy) | Queries (Standard) | tabledata.list | tabledata.insertAll | Creating a table | Query Params (Beta) |
|------------------------------------------|------------------|--------------------|----------------|---------------------|------------------|---------------------|
| scalar value columns | | | | | | |
| repeated scalar value columns | N/A (flattened) | | | | | |
| record value columns | N/A (flattened) | | | | | |
| repeated record value columns | N/A (flattened) | | | | | |
| record value inside another record value | N/A (flattened) | | | | | |
| repeated value within a record value | N/A (flattened) | | | | | |
| timestamp (UTC) | | | | | | |
| datetime (no time zone) | N/A? | | | | | |
| date | N/A? | | | | | |
| time | N/A? | | | | | |
| bytes | N/A? | | | | | | | 1.0 | bigquery: complete system tests - Here's what we need to hit.
| Data type | Queries (Legacy) | Queries (Standard) | tabledata.list | tabledata.insertAll | Creating a table | Query Params (Beta) |
|------------------------------------------|------------------|--------------------|----------------|---------------------|------------------|---------------------|
| scalar value columns | | | | | | |
| repeated scalar value columns | N/A (flattened) | | | | | |
| record value columns | N/A (flattened) | | | | | |
| repeated record value columns | N/A (flattened) | | | | | |
| record value inside another record value | N/A (flattened) | | | | | |
| repeated value within a record value | N/A (flattened) | | | | | |
| timestamp (UTC) | | | | | | |
| datetime (no time zone) | N/A? | | | | | |
| date | N/A? | | | | | |
| time | N/A? | | | | | |
| bytes | N/A? | | | | | | | test | bigquery complete system tests here s what we need to hit data type queries legacy queries standard tabledata list tabledata insertall creating a table query params beta scalar value columns repeated scalar value columns n a flattened record value columns n a flattened repeated record value columns n a flattened record value inside another record value n a flattened repeated value within a record value n a flattened timestamp utc datetime no time zone n a date n a time n a bytes n a | 1 |
16,985 | 3,587,322,880 | IssuesEvent | 2016-01-30 07:10:08 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | flake in test-cmd/images on image import | kind/test-flake priority/P2 | Flake in `test-cmd` [here](https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/7460/console):
```
error: the image stream has a different import spec "", pass --confirm to update
Waiting for the import to complete, CTRL+C to stop waiting.
error: unable to import image: the default tag "5.7" has not been set on repository "library/mysql"
!!! Error in test/cmd/images.sh:80
'[ "$(oc import-image mysql --from=mysql --confirm | grep "sha256:")" ]' exited with status 1
Call stack:
1: test/cmd/images.sh:80 main(...)
Exiting with status 1
!!! Error in hack/test-cmd.sh:284
'${test}' exited with status 1
Call stack:
1: hack/test-cmd.sh:284 main(...)
Exiting with status 1
[FAIL] !!!!! Test Failed !!!!
``` | 1.0 | flake in test-cmd/images on image import - Flake in `test-cmd` [here](https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/7460/console):
```
error: the image stream has a different import spec "", pass --confirm to update
Waiting for the import to complete, CTRL+C to stop waiting.
error: unable to import image: the default tag "5.7" has not been set on repository "library/mysql"
!!! Error in test/cmd/images.sh:80
'[ "$(oc import-image mysql --from=mysql --confirm | grep "sha256:")" ]' exited with status 1
Call stack:
1: test/cmd/images.sh:80 main(...)
Exiting with status 1
!!! Error in hack/test-cmd.sh:284
'${test}' exited with status 1
Call stack:
1: hack/test-cmd.sh:284 main(...)
Exiting with status 1
[FAIL] !!!!! Test Failed !!!!
``` | test | flake in test cmd images on image import flake in test cmd error the image stream has a different import spec pass confirm to update waiting for the import to complete ctrl c to stop waiting error unable to import image the default tag has not been set on repository library mysql error in test cmd images sh exited with status call stack test cmd images sh main exiting with status error in hack test cmd sh test exited with status call stack hack test cmd sh main exiting with status test failed | 1 |
189,977 | 6,803,383,697 | IssuesEvent | 2017-11-03 00:35:44 | Automattic/amp-wp | https://api.github.com/repos/Automattic/amp-wp | closed | Support for Jetpack Sharing | wpcom-jetpack-compat [Priority] Medium [Type] Enhancement | An AMP plugin ticket for tracking Automattic/jetpack#3354 and Automattic/jetpack#3483. Personally, I'm leaning toward adding support in this plugin, like what's done with stats.
| 1.0 | Support for Jetpack Sharing - An AMP plugin ticket for tracking Automattic/jetpack#3354 and Automattic/jetpack#3483. Personally, I'm leaning toward adding support in this plugin, like what's done with stats.
| non_test | support for jetpack sharing an amp plugin ticket for tracking automattic jetpack and automattic jetpack personally i m leaning toward adding support in this plugin like what s done with stats | 0 |
391,149 | 26,881,806,221 | IssuesEvent | 2023-02-05 18:24:01 | wtkcooley/Eventify | https://api.github.com/repos/wtkcooley/Eventify | closed | add artifacts directory | documentation | - Create an artifact directory to store demo videos/screenshots
- Add some artifacts to `README.md` | 1.0 | add artifacts directory - - Create an artifact directory to store demo videos/screenshots
- Add some artifacts to `README.md` | non_test | add artifacts directory create an artifact directory to store demo videos screenshots add some artifacts to readme md | 0 |
73,684 | 14,113,686,858 | IssuesEvent | 2020-11-07 12:32:38 | cython/cython | https://api.github.com/repos/cython/cython | closed | [BUG] Bad reference counting when assigning a memory view to itself | Code Generation defect | **Describe the bug**
The code generated when assigning a memory view `a` to itself, `a = a`, does not function correctly with regards to reference counting. When doing this outside of a function, an `UnboundLocalError` is raised the next time `a` is used. When doing this inside a function, a segfault occurs.
**To Reproduce**
```cython
import numpy as np
cdef int[::1] a
a = np.zeros(1, dtype=np.intc)
print('before bad assignment:', a[0])
a = a
print('after bad assignment:', a[0])
```
**Expected behavior**
The above should of course print
```
before bad assignment: 0
after bad assignment: 0
```
In actuality only the first line is written, as the code fails on the last `a[0]` lookup. When doing this outside of a function, we get
```
UnboundLocalError: local variable 'a' referenced before assignment
```
**Environment**
- OS: Linux Mint 19.3
- Python version: 3.8.2
- Cython version: 0.29.16
**Additional context**
Looking at the generated C code, the problematic `a = a` statement turns into
```C
__PYX_XDEC_MEMVIEW(&__pyx_v_a, 1);
__PYX_INC_MEMVIEW(&__pyx_v_a, 0);
__pyx_v_a = __pyx_v_a;
```
I'm not familiar with the details of `__PYX_XDEC_MEMVIEW` and `__PYX_INC_MEMVIEW`, but I presume they decrement and increment the reference count of `__pyx_v_a` (i.e. `a`). Removing these two lines and keeping only the bottom one fixes the issue.
One can of course just as well remove all three lines, since `__pyx_v_a = __pyx_v_a` is guaranteed to be a no-op (it will be optimized away by the C compiler anyway). | 1.0 | [BUG] Bad reference counting when assigning a memory view to itself - **Describe the bug**
The code generated when assigning a memory view `a` to itself, `a = a`, does not function correctly with regards to reference counting. When doing this outside of a function, an `UnboundLocalError` is raised the next time `a` is used. When doing this inside a function, a segfault occurs.
**To Reproduce**
```cython
import numpy as np
cdef int[::1] a
a = np.zeros(1, dtype=np.intc)
print('before bad assignment:', a[0])
a = a
print('after bad assignment:', a[0])
```
**Expected behavior**
The above should of course print
```
before bad assignment: 0
after bad assignment: 0
```
In actuality only the first line is written, as the code fails on the last `a[0]` lookup. When doing this outside of a function, we get
```
UnboundLocalError: local variable 'a' referenced before assignment
```
**Environment**
- OS: Linux Mint 19.3
- Python version: 3.8.2
- Cython version: 0.29.16
**Additional context**
Looking at the generated C code, the problematic `a = a` statement turns into
```C
__PYX_XDEC_MEMVIEW(&__pyx_v_a, 1);
__PYX_INC_MEMVIEW(&__pyx_v_a, 0);
__pyx_v_a = __pyx_v_a;
```
I'm not familiar with the details of `__PYX_XDEC_MEMVIEW` and `__PYX_INC_MEMVIEW`, but I presume they decrement and increment the reference count of `__pyx_v_a` (i.e. `a`). Removing these two lines and keeping only the bottom one fixes the issue.
One can of course just as well remove all three lines, since `__pyx_v_a = __pyx_v_a` is guaranteed to be a no-op (it will be optimized away by the C compiler anyway). | non_test | bad reference counting when assigning a memory view to itself describe the bug the code generated when assigning a memory view a to itself a a does not function correctly with regards to reference counting when doing this outside of a function an unboundlocalerror is raised the next time a is used when doing this inside a function a segfault occurs to reproduce cython import numpy as np cdef int a a np zeros dtype np intc print before bad assignment a a a print after bad assignment a expected behavior the above should of course print before bad assignment after bad assignment in actuality only the first line is written as the code fails on the last a lookup when doing this outside of a function we get unboundlocalerror local variable a referenced before assignment environment os linux mint python version cython version additional context looking at the generated c code the problematic a a statement turns into c pyx xdec memview pyx v a pyx inc memview pyx v a pyx v a pyx v a i m not familiar with the details of pyx xdec memview and pyx inc memview but i presume they decrement and increment the reference count of pyx v a i e a removing these two lines and keeping only the bottom one fixes the issue one can of course just as well remove all three lines since pyx v a pyx v a is guaranteed to be a no op it will be optimized away by the c compiler anyway | 0 |
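The generated-code snippet quoted in the report releases the old memoryview before retaining the new one, which is exactly backwards when both are the same object. A toy pure-Python model (an illustration, not Cython's actual runtime) makes the ordering bug visible:

```python
class Slot:
    """Toy refcounted buffer slot, standing in for a memoryview variable."""
    def __init__(self, buf):
        self.buf, self.rc = buf, 1

    def xdec(self):                    # like __PYX_XDEC_MEMVIEW
        self.rc -= 1
        if self.rc == 0:
            self.buf = None            # last reference gone: buffer released

    def xinc(self):                    # like __PYX_INC_MEMVIEW
        self.rc += 1

def buggy_self_assign(slot):
    slot.xdec()    # decref first, as in the generated code ...
    slot.xinc()    # ... too late: the only reference already died
    return slot

def safe_self_assign(slot):
    slot.xinc()    # classic refcounting rule: retain the new value ...
    slot.xdec()    # ... before releasing the old one
    return slot
```

With `rc == 1`, the buggy order leaves `slot.buf` as `None` — the Python-level analogue of the `UnboundLocalError`/segfault in the report — while retain-before-release (or simply skipping the no-op assignment, as the report suggests) is safe.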
87,176 | 3,738,781,177 | IssuesEvent | 2016-03-09 00:28:48 | DDMAL/rodan-client | https://api.github.com/repos/DDMAL/rodan-client | opened | Improve enumeration use of RunJob states | Priority: MINOR Type: MAINTENANCE | I currently have two places where RunJob states are listed: Model and Collection. Need to simplify it. | 1.0 | Improve enumeration use of RunJob states - I currently have two places where RunJob states are listed: Model and Collection. Need to simplify it. | non_test | improve enumeration use of runjob states i currently have two places where runjob states are listed model and collection need to simplify it | 0 |
495,800 | 14,288,232,109 | IssuesEvent | 2020-11-23 17:24:03 | googleapis/repo-automation-bots | https://api.github.com/repos/googleapis/repo-automation-bots | closed | generate-bot: Cannot find module 'enquirer' | priority: p2 type: bug | Following the instructions to create a new bot using generate-bot:
```
> node ./packages/generate-bot/run.js
internal/modules/cjs/loader.js:985
throw err;
^
Error: Cannot find module 'enquirer'
Require stack:
- /home/chingor/code/repo-automation-bots/packages/generate-bot/main.js
- /home/chingor/code/repo-automation-bots/packages/generate-bot/run.js
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:982:15)
at Function.Module._load (internal/modules/cjs/loader.js:864:27)
at Module.require (internal/modules/cjs/loader.js:1044:19)
at require (internal/modules/cjs/helpers.js:77:18)
at Object.<anonymous> (/home/chingor/code/repo-automation-bots/packages/generate-bot/main.js:16:18)
at Module._compile (internal/modules/cjs/loader.js:1158:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1178:10)
at Module.load (internal/modules/cjs/loader.js:1002:32)
at Function.Module._load (internal/modules/cjs/loader.js:901:14)
at Module.require (internal/modules/cjs/loader.js:1044:19) {
code: 'MODULE_NOT_FOUND',
requireStack: [
'/home/chingor/code/repo-automation-bots/packages/generate-bot/main.js',
'/home/chingor/code/repo-automation-bots/packages/generate-bot/run.js'
]
}
``` | 1.0 | generate-bot: Cannot find module 'enquirer' - Following the instructions to create a new bot using generate-bot:
```
> node ./packages/generate-bot/run.js
internal/modules/cjs/loader.js:985
throw err;
^
Error: Cannot find module 'enquirer'
Require stack:
- /home/chingor/code/repo-automation-bots/packages/generate-bot/main.js
- /home/chingor/code/repo-automation-bots/packages/generate-bot/run.js
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:982:15)
at Function.Module._load (internal/modules/cjs/loader.js:864:27)
at Module.require (internal/modules/cjs/loader.js:1044:19)
at require (internal/modules/cjs/helpers.js:77:18)
at Object.<anonymous> (/home/chingor/code/repo-automation-bots/packages/generate-bot/main.js:16:18)
at Module._compile (internal/modules/cjs/loader.js:1158:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1178:10)
at Module.load (internal/modules/cjs/loader.js:1002:32)
at Function.Module._load (internal/modules/cjs/loader.js:901:14)
at Module.require (internal/modules/cjs/loader.js:1044:19) {
code: 'MODULE_NOT_FOUND',
requireStack: [
'/home/chingor/code/repo-automation-bots/packages/generate-bot/main.js',
'/home/chingor/code/repo-automation-bots/packages/generate-bot/run.js'
]
}
``` | non_test | generate bot cannot find module enquirer following the instructions to create a new bot using generate bot node packages generate bot run js internal modules cjs loader js throw err error cannot find module enquirer require stack home chingor code repo automation bots packages generate bot main js home chingor code repo automation bots packages generate bot run js at function module resolvefilename internal modules cjs loader js at function module load internal modules cjs loader js at module require internal modules cjs loader js at require internal modules cjs helpers js at object home chingor code repo automation bots packages generate bot main js at module compile internal modules cjs loader js at object module extensions js internal modules cjs loader js at module load internal modules cjs loader js at function module load internal modules cjs loader js at module require internal modules cjs loader js code module not found requirestack home chingor code repo automation bots packages generate bot main js home chingor code repo automation bots packages generate bot run js | 0 |
48,783 | 7,457,972,573 | IssuesEvent | 2018-03-30 08:01:05 | hackoregon/data-science-pet-containers | https://api.github.com/repos/hackoregon/data-science-pet-containers | closed | Check upstream repositories | bug documentation | Some of the images may be mixing "jsssie" and "stretch" packages - this is a problem if it's happening. | 1.0 | Check upstream repositories - Some of the images may be mixing "jsssie" and "stretch" packages - this is a problem if it's happening. | non_test | check upstream repositories some of the images may be mixing jsssie and stretch packages this is a problem if it s happening | 0 |
204,107 | 15,415,483,234 | IssuesEvent | 2021-03-05 02:42:55 | bitcoin/bitcoin | https://api.github.com/repos/bitcoin/bitcoin | closed | windows: new -Wreturn-type warnings after #19203 | Bug Tests Upstream Windows | We now get lots of compile time output like:
```bash
In file included from test/fuzz/addition_overflow.cpp:7:
./test/fuzz/util.h: In member function 'virtual SOCKET FuzzedSock::Get() const':
./test/fuzz/util.h:563:5: warning: no return statement in function returning non-void [-Wreturn-type]
563 | }
| ^
./test/fuzz/util.h: In member function 'virtual SOCKET FuzzedSock::Release()':
./test/fuzz/util.h:568:5: warning: no return statement in function returning non-void [-Wreturn-type]
568 | }
| ^
...
```
anywhere `test/fuzz/util.h` is included. Would be great to not have build logs filled with this spam.
This was all present in the CI. i.e https://cirrus-ci.com/task/5500626558779392?command=ci#L4283.
cc @practicalswift @vasild | 1.0 | windows: new -Wreturn-type warnings after #19203 - We now get lots of compile time output like:
```bash
In file included from test/fuzz/addition_overflow.cpp:7:
./test/fuzz/util.h: In member function 'virtual SOCKET FuzzedSock::Get() const':
./test/fuzz/util.h:563:5: warning: no return statement in function returning non-void [-Wreturn-type]
563 | }
| ^
./test/fuzz/util.h: In member function 'virtual SOCKET FuzzedSock::Release()':
./test/fuzz/util.h:568:5: warning: no return statement in function returning non-void [-Wreturn-type]
568 | }
| ^
...
```
anywhere `test/fuzz/util.h` is included. Would be great to not have build logs filled with this spam.
This was all present in the CI. i.e https://cirrus-ci.com/task/5500626558779392?command=ci#L4283.
cc @practicalswift @vasild | test | windows new wreturn type warnings after we now get lots of compile time output like bash in file included from test fuzz addition overflow cpp test fuzz util h in member function virtual socket fuzzedsock get const test fuzz util h warning no return statement in function returning non void test fuzz util h in member function virtual socket fuzzedsock release test fuzz util h warning no return statement in function returning non void anywhere test fuzz util h is included would be great to not have build logs filled with this spam this was all present in the ci i e cc practicalswift vasild | 1 |
81,331 | 15,612,315,302 | IssuesEvent | 2021-03-19 15:15:01 | NixOS/nixpkgs | https://api.github.com/repos/NixOS/nixpkgs | opened | Vulnerability roundup 100: squid-4.14: 1 advisory [5.3] | 1.severity: security | [search](https://search.nix.gsc.io/?q=squid&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=squid+in%3Apath&type=Code)
* [ ] [CVE-2021-28116](https://nvd.nist.gov/vuln/detail/CVE-2021-28116) CVSSv3=5.3 (nixos-unstable)
Scanned versions: nixos-unstable: 1f77a4c8c74.
Cc @7c6f434c
Cc @fpletz
| True | Vulnerability roundup 100: squid-4.14: 1 advisory [5.3] - [search](https://search.nix.gsc.io/?q=squid&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=squid+in%3Apath&type=Code)
* [ ] [CVE-2021-28116](https://nvd.nist.gov/vuln/detail/CVE-2021-28116) CVSSv3=5.3 (nixos-unstable)
Scanned versions: nixos-unstable: 1f77a4c8c74.
Cc @7c6f434c
Cc @fpletz
| non_test | vulnerability roundup squid advisory nixos unstable scanned versions nixos unstable cc cc fpletz | 0 |
52,577 | 7,775,588,283 | IssuesEvent | 2018-06-05 03:45:53 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | opened | Better document document-level experiments or consider replacing with origin trials | Related to: Documentation | # Proposal
I would like to propose one of (in order of preference):
- removing support for document-level experiments in favor of origin trial experiments
- not allowing breaking changes for features with document-level experiments (with the same policy we use for launched policies), and adding this to our [experiment documentation](https://www.ampproject.org/docs/reference/experimental)
- providing very clear documentation that document-level experiments may break a page at any time, and should not be used on non-experimental production traffic
# Background
AMP currently supports [document-level experiments](https://www.ampproject.org/docs/reference/experimental), i.e. by adding a meta "amp-experiments-opt-in" on a page the page author can enable experiments that are whitelisted for document level experiments (specified in [allow-doc-opt-in in prod-config.json](https://github.com/ampproject/amphtml/blob/master/build-system/global-configs/prod-config.json)).
There are three main types of experiments we support:
- Experiments enabled in the browser (via cookies or AMP.toggleExperiment), which allows developers to test experimental features but not to push these features to production.
- Document-level experiments, which allows developers to use experimental features in a production setting by adding the meta tag to their document (ideally behind the developers own experiment).
- Origin trial experiments, which allows us to whitelist domains who can enable the experiment features in a production setting (ideally behind the publisher's own experiment).
# Document-level vs. origin trial
The biggest issue with document-level experiments is that we have no good way to know everyone who is using the document-level experiment or to notify them of upcoming breaking changes. This means that a breaking change in the experiment carries the risk that a page in production breaks. In contrast, with origin trials the whitelisting process allows us to (a) require developers using the experiment to acknowledge the risks involved in using an experimental feature and (b) collect information that we could use to notify developers using the experimental feature in the case a breaking change will be made.
The main use case we've had for document-level experiments thus far has been cases where a few developers are working closely with the AMP community to refine an upcoming feature. This case is served well by the origin-trial case, and the number of these cases is not very large.
# Other use cases for document-level experiments
The document-level experiment I2I (#6869) has an additional use case for the amp-inabox. I don't have complete context for that use case so I'd like some input from @lannka whether origin trials is sufficient for that case. If not we may still be able to move to the "document-level experiments don't allow breaking changes" policy.
/cc @aghassemi @cathyxz @choumx @cramforce @jridgewell @kristoferbaxter
| 1.0 | Better document document-level experiments or consider replacing with origin trials - # Proposal
I would like to propose one of (in order of preference):
- removing support for document-level experiments in favor of origin trial experiments
- not allowing breaking changes for features with document-level experiments (with the same policy we use for launched policies), and adding this to our [experiment documentation](https://www.ampproject.org/docs/reference/experimental)
- providing very clear documentation that document-level experiments may break a page at any time, and should not be used on non-experimental production traffic
# Background
AMP currently supports [document-level experiments](https://www.ampproject.org/docs/reference/experimental), i.e. by adding a meta "amp-experiments-opt-in" on a page the page author can enable experiments that are whitelisted for document level experiments (specified in [allow-doc-opt-in in prod-config.json](https://github.com/ampproject/amphtml/blob/master/build-system/global-configs/prod-config.json)).
There are three main types of experiments we support:
- Experiments enabled in the browser (via cookies or AMP.toggleExperiment), which allows developers to test experimental features but not to push these features to production.
- Document-level experiments, which allows developers to use experimental features in a production setting by adding the meta tag to their document (ideally behind the developers own experiment).
- Origin trial experiments, which allows us to whitelist domains who can enable the experiment features in a production setting (ideally behind the publisher's own experiment).
# Document-level vs. origin trial
The biggest issue with document-level experiments is that we have no good way to know everyone who is using the document-level experiment or to notify them of upcoming breaking changes. This means that a breaking change in the experiment carries the risk that a page in production breaks. In contrast, with origin trials the whitelisting process allows us to (a) require developers using the experiment to acknowledge the risks involved in using an experimental feature and (b) collect information that we could use to notify developers using the experimental feature in the case a breaking change will be made.
The main use case we've had for document-level experiments thus far has been cases where a few developers are working closely with the AMP community to refine an upcoming feature. This case is served well by the origin-trial case, and the number of these cases is not very large.
# Other use cases for document-level experiments
The document-level experiment I2I (#6869) has an additional use case for the amp-inabox. I don't have complete context for that use case so I'd like some input from @lannka whether origin trials is sufficient for that case. If not we may still be able to move to the "document-level experiments don't allow breaking changes" policy.
/cc @aghassemi @cathyxz @choumx @cramforce @jridgewell @kristoferbaxter
| non_test | better document document level experiments or consider replacing with origin trials proposal i would like to propose one of in order of preference removing support for document level experiments in favor of origin trial experiments not allowing breaking changes for features with document level experiments with the same policy we use for launched policies and adding this to our providing very clear documentation that document level experiments may break a page at any time and should not be used on non experimental production traffic background amp currently supports i e by adding a meta amp experiments opt in on a page the page author can enable experiments that are whitelisted for document level experiments specified in there are three main types of experiments we support experiments enabled in the browser via cookies or amp toggleexperiment which allows developers to test experimental features but not to push these features to production document level experiments which allows developers to use experimental features in a production setting by adding the meta tag to their document ideally behind the developers own experiment origin trial experiments which allows us to whitelist domains who can enable the experiment features in a production setting ideally behind the publisher s own experiment document level vs origin trial the biggest issue with document level experiments is that we have no good way to know everyone who is using the document level experiment or to notify them of upcoming breaking changes this means that a breaking change in the experiment carries the risk that a page in production breaks in contrast with origin trials the whitelisting process allows us to a require developers using the experiment to acknowledge the risks involved in using an experimental feature and b collect information that we could use to notify developers using the experimental feature in the case a breaking change will be made the main use case we ve had for document level experiments thus far has been cases where a few developers are working closely with the amp community to refine an upcoming feature this case is served well by the origin trial case and the number of these cases is not very large other use cases for document level experiments the document level experiment has an additional use case for the amp inabox i don t have complete context for that use case so i d like some input from lannka whether origin trials is sufficient for that case if not we may still be able to move to the document level experiments don t allow breaking changes policy cc aghassemi cathyxz choumx cramforce jridgewell kristoferbaxter | 0
179,767 | 21,580,333,097 | IssuesEvent | 2022-05-02 18:00:35 | vincenzodistasio97/excel-to-json | https://api.github.com/repos/vincenzodistasio97/excel-to-json | opened | WS-2020-0091 (High) detected in http-proxy-1.16.2.tgz | security vulnerability | ## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.16.2.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.16.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.16.2.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.1.tgz (Root Library)
- webpack-dev-server-2.9.4.tgz
- http-proxy-middleware-0.17.4.tgz
- :x: **http-proxy-1.16.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/e367d4db4134dc676344b2b9fb2443300bd3c9c7">e367d4db4134dc676344b2b9fb2443300bd3c9c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-14</p>
<p>Fix Resolution (http-proxy): 1.18.1</p>
<p>Direct dependency fix Resolution (react-scripts): 1.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2020-0091 (High) detected in http-proxy-1.16.2.tgz - ## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.16.2.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.16.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.16.2.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.1.tgz (Root Library)
- webpack-dev-server-2.9.4.tgz
- http-proxy-middleware-0.17.4.tgz
- :x: **http-proxy-1.16.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/e367d4db4134dc676344b2b9fb2443300bd3c9c7">e367d4db4134dc676344b2b9fb2443300bd3c9c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-14</p>
<p>Fix Resolution (http-proxy): 1.18.1</p>
<p>Direct dependency fix Resolution (react-scripts): 1.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | ws high detected in http proxy tgz ws high severity vulnerability vulnerable library http proxy tgz http proxying for the masses library home page a href path to dependency file client package json path to vulnerable library client node modules http proxy package json dependency hierarchy react scripts tgz root library webpack dev server tgz http proxy middleware tgz x http proxy tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http proxy direct dependency fix resolution react scripts step up your open source security game with whitesource | 0 |
726,359 | 24,996,190,826 | IssuesEvent | 2022-11-03 00:37:06 | SparkDevNetwork/Rock | https://api.github.com/repos/SparkDevNetwork/Rock | closed | Event Registration Registrant Detail page uses incorrect text for missing Signed Documents | Type: Enhancement Status: Confirmed Priority: Low Topic: Event Registration Fixed in v14.1 | ### Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the following:
* Can you reproduce the problem on a fresh install or the [demo site](http://rock.rocksolidchurchdemo.com/)?
* Did you include your Rock version number and [client culture](https://github.com/SparkDevNetwork/Rock/wiki/Environment-and-Diagnostics-Information) setting?
* Did you [perform a cursory search](https://github.com/issues?q=is%3Aissue+user%3ASparkDevNetwork+-repo%3ASparkDevNetwork%2FSlack) to see if your bug or enhancement is already reported?
### Description
When looking at an Event Registration that requires a signed document and comparing it to a Group that requires a signed document, the verbiage does not match. At all.
On the Group Member Detail page it will tell you (not exact words) "We've sent a request to this user but they haven't signed it yet, last request was sent 18 hours ago."
On the Registrant Detail page it will tell you (not exact words) "We don't have a signed document for Daniel." and the same message is used whether a request has been sent or not.
### Steps to Reproduce
1. Setup a signed document for both a Group and an Event Registration.
2. Add person to both but don't sign the document yet.
3. Compare the messages.
**Expected behavior:**
While the text doesn't need to be exactly the same (though it probably should be if possible), our users are getting confused by the fact that they can't tell in an Event Registration if the document has been sent vs they just haven't signed yet. Additionally, they want to know how long ago one was sent like they can with Group Member details so that they don't resend a request shortly after the first was sent.
**Actual behavior:**
Message about missing signed document is ambiguous and non-consistent.
### Pull Request
We can build a pull request to sync up the verbiage between the two blocks if enhancement is approved.
### Versions
* **Rock Version:** 9.2
* **Client Culture Setting:** en-US | 1.0 | Event Registration Registrant Detail page uses incorrect text for missing Signed Documents - ### Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the following:
* Can you reproduce the problem on a fresh install or the [demo site](http://rock.rocksolidchurchdemo.com/)?
* Did you include your Rock version number and [client culture](https://github.com/SparkDevNetwork/Rock/wiki/Environment-and-Diagnostics-Information) setting?
* Did you [perform a cursory search](https://github.com/issues?q=is%3Aissue+user%3ASparkDevNetwork+-repo%3ASparkDevNetwork%2FSlack) to see if your bug or enhancement is already reported?
### Description
When looking at an Event Registration that requires a signed document and comparing it to a Group that requires a signed document, the verbiage does not match. At all.
On the Group Member Detail page it will tell you (not exact words) "We've sent a request to this user but they haven't signed it yet, last request was sent 18 hours ago."
On the Registrant Detail page it will tell you (not exact words) "We don't have a signed document for Daniel." and the same message is used whether a request has been sent or not.
### Steps to Reproduce
1. Setup a signed document for both a Group and an Event Registration.
2. Add person to both but don't sign the document yet.
3. Compare the messages.
**Expected behavior:**
While the text doesn't need to be exactly the same (though it probably should be if possible), our users are getting confused by the fact that they can't tell in an Event Registration if the document has been sent vs they just haven't signed yet. Additionally, they want to know how long ago one was sent like they can with Group Member details so that they don't resend a request shortly after the first was sent.
**Actual behavior:**
Message about missing signed document is ambiguous and non-consistent.
### Pull Request
We can build a pull request to sync up the verbiage between the two blocks if enhancement is approved.
### Versions
* **Rock Version:** 9.2
* **Client Culture Setting:** en-US | non_test | event registration registrant detail page uses incorrect text for missing signed documents prerequisites put an x between the brackets on this line if you have done all of the following can you reproduce the problem on a fresh install or the did you include your rock version number and setting did you to see if your bug or enhancement is already reported description when looking at an event registration that requires a signed document and comparing it to a group that requires a signed document the verbiage does not match at all on the group member detail page it will tell you not exact words we ve sent a request to this user but they haven t signed it yet last request was sent hours ago on the registrant detail page it will tell you not exact words we don t have a signed document for daniel and the same message is used whether a request has been sent or not steps to reproduce setup a signed document for both a group and an event registration add person to both but don t sign the document yet compare the messages expected behavior while the text doesn t need to be exactly the same though it probably should be if possible our users are getting confused by the fact that they can t tell in an event registration if the document has been sent vs they just haven t signed yet additionally they want to know how long ago one was sent like they can with group member details so that they don t resent a request shortly after the first was sent actual behavior message about missing signed document is ambiguous and non consistent pull request we can build a pull request to sync up the verbiage between the two blocks if enhancement is approved versions rock version client culture setting en us | 0 |
40,593 | 5,310,084,890 | IssuesEvent | 2017-02-12 16:59:35 | ukdtom/WebTools.bundle | https://api.github.com/repos/ukdtom/WebTools.bundle | closed | [All] Add search function to SUB module | enhancement Ready for testing | Add a function that allows the frontend to search for a movie or a tv show.
Episodes are not really needed so just show name and movie title.
Preferred return value would be the key of the item/items in question and let the frontend query for the rest of the information. That would make it a generic function that can be placed in pm module or something alike.
| 1.0 | [All] Add search function to SUB module - Add a function that allows the frontend to search for a movie or a tv show.
Episodes are not really needed so just show name and movie title.
Preferred return value would be the key of the item/items in question and let the frontend query for the rest of the information. That would make it a generic function that can be placed in pm module or something alike.
| test | add search function to sub module add a function that allows the frontend to search for a movie or a tv show episodes are not really needed so just show name and movie title preferred return value would be the key of the item items in question and let the frontend query for the rest of the information that would make it a generic function that can be placed in pm module or something alike | 1
51,030 | 6,144,263,695 | IssuesEvent | 2017-06-27 08:29:05 | brave/browser-laptop | https://api.github.com/repos/brave/browser-laptop | closed | Manual Test Run on Windows x64 for 0.17.x RC3 | OS/Windows release-notes/exclude tests | # Manual Tests Template
Copy this to a new issue on each release.
The tests should run for macOS, Windows x64, Windows ia32, and a Linux distro for each release.
## Installer
1. [x] Check that installer is close to the size of last release.
2. [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave.app/` and make sure it returns `accepted`. If Windows right click on the installer exe and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window.
3. [x] Check Brave, muon, and libchromiumcontent version in About and make sure it is EXACTLY as expected.
## Last changeset test
1. [x] Test what is covered by the last changeset (you can find this by clicking on the SHA in about:brave).
## Per release specialty tests
- [x] Webview crash on YouTube after a while of inactivity . ([#9663](https://github.com/brave/browser-laptop/issues/9663))
- [x] Update Muon to 4.1.4. ([#9645](https://github.com/brave/browser-laptop/issues/9645))
- [x] Dead tab when loading large content. ([#8045](https://github.com/brave/browser-laptop/issues/8045))
- [x] Upgrade to Chromium 59.0.3071.109. ([#9626](https://github.com/brave/browser-laptop/issues/9626))
- [ ] Unable to view HBOGO video. ([#8581](https://github.com/brave/browser-laptop/issues/8581))
## Widevine/Netflix test
1. [x] Test that you can log into Netflix and start a show.
## Ledger
1. [ ] Create a wallet with a value other than $5 selected in the monthly budget dropdown. Click on the 'Add Funds' button and check that Coinbase transactions are blocked.
2. [x] Remove all `ledger-*.json` files from `~/Library/Application\ Support/Brave/`. Go to the Payments tab in about:preferences, enable payments, click on `create wallet`. Check that the `add funds` button appears after a wallet is created.
3. [ ] Click on `add funds` and verify that adding funds through Coinbase increases the account balance.
4. [ ] Repeat the step above but add funds by scanning the QR code in a mobile bitcoin app instead of through Coinbase.
5. [x] Visit nytimes.com for a few seconds and make sure it shows up in the Payments table.
6. [x] Go to https://jsfiddle.net/LnwtLckc/5/ and click the register button. In the Payments tab, click `add funds`. Verify that the `transfer funds` button is visible and that clicking on `transfer funds` opens a jsfiddle URL in a new tab.
7. [x] Go to https://jsfiddle.net/LnwtLckc/5/ and click `unregister`. Verify that the `transfer funds` button no longer appears in the `add funds` modal.
8. [x] Check that disabling payments and enabling them again does not lose state.
## Sync
1. [x] Verify you are able to sync two devices using the secret code
2. [x] Visit a site on device 1 and change shield setting, ensure that the saved site preference is synced to device 2
3. [x] Enable Browsing history sync on device 1, ensure the history is shown on device 2
4. [x] Import/Add bookmarks on device 1, ensure it is synced on device 2
5. [x] Ensure imported bookmark folder structure is maintained on device 2
6. [x] Ensure bookmark favicons are shown after sync
## Data
1. [x] Make sure that data from the last version appears in the new version OK.
2. [x] Test that the previous version's cookies are preserved in the next version.
## About pages
1. [x] Test that about:adblock loads
2. [x] Test that about:autofill loads
3. [x] Test that about:bookmarks loads bookmarks
4. [x] Test that about:downloads loads downloads
5. [x] Test that about:extensions loads
6. [x] Test that about:history loads history
7. [x] Test that about:passwords loads
8. [x] Test that about:styles loads
9. [x] Test that about:preferences changing a preference takes effect right away
10. [x] Test that about:preferences language change takes effect on re-start
## Bookmarks
1. [x] Test that creating a bookmark on the bookmarks toolbar with the star button works
2. [x] Test that creating a bookmark on the bookmarks toolbar by dragging the un/lock icon works
3. [x] Test that creating a bookmark folder on the bookmarks toolbar works
4. [x] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
5. [x] Test that clicking a bookmark in the toolbar loads the bookmark.
6. [x] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Context menus
1. [x] Make sure context menu items in the URL bar work
2. [x] Make sure context menu items on content work with no selected text.
3. [x] Make sure context menu items on content work with selected text.
4. [x] Make sure context menu items on content work inside an editable control on `about:styles` (input, textarea, or contenteditable).
## Find on page
1. [x] Ensure search box is shown with shortcut
2. [x] Test successful find
3. [x] Test forward and backward find navigation
4. [x] Test failed find shows 0 results
5. [x] Test match case find
## Geolocation
1. [x] Check that https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation works
## Site hacks
1. [x] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
1. [x] Test downloading a file works and that all actions on the download item works.
## Fullscreen
1. [x] Test that entering full screen window works View -> Toggle Full Screen. And exit back (Not Esc).
2. [x] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Tabs, Pinning and Tear off tabs
1. [x] Test that tabs are pinnable
2. [x] Test that tabs are unpinnable
3. [x] Test that tabs are draggable to same tabset
4. [x] Test that tabs are draggable to alternate tabset
5. [x] Test that tabs can be torn off into a new window
6. [x] Test that you are able to reattach a tab that is torn off into a new window
7. [x] Test that tab pages can be closed
8. [x] Test that tab pages can be muted
## Zoom
1. [x] Test zoom in / out shortcut works
2. [x] Test hamburger menu zooms.
3. [x] Test zoom saved when you close the browser and restore on a single site.
4. [x] Test zoom saved when you navigate within a single origin site.
5. [x] Test that navigating to a different origin resets the zoom
## Bravery settings
1. [x] Check that HTTPS Everywhere works by loading https://https-everywhere.badssl.com/
2. [x] Turning HTTPS Everywhere off and shields off both disable the redirect to https://https-everywhere.badssl.com/
3. [x] Check that ad replacement works on http://slashdot.org
4. [x] Check that toggling to blocking and allow ads works as expected.
5. [x] Test that clicking through a cert error in https://badssl.com/ works.
6. [x] Test that Safe Browsing works (http://downloadme.org/)
7. [x] Turning Safe Browsing off and shields off both disable safe browsing for http://downloadme.org/.
8. [x] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [x] Test that about:preferences default Bravery settings take effect on pages with no site settings.
10. [x] Test that turning on fingerprinting protection in about:preferences shows 3 fingerprints blocked at https://jsfiddle.net/bkf50r8v/13/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
11. [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked.
12. [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
## Content tests
1. [x] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
2. [x] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
3. [x] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [x] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`. Then reload https://trac.torproject.org/projects/tor/login and make sure the password is autofilled.
5. [x] Open `about:styles` and type some misspellings on a textbox, make sure they are underlined.
6. [x] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
7. [x] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
8. [x] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
9. [x] Test that PDF is loaded at http://www.orimi.com/pdf-test.pdf
10. [x] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
## Flash tests
1. [x] Turn on Flash in about:preferences#security. Test that clicking on 'Install Flash' banner on myspace.com shows a notification to allow Flash and that the banner disappears when 'Allow' is clicked.
2. [x] Test that flash placeholder appears on http://www.homestarrunner.com
## Autofill tests
1. [x] Test that autofill works on http://www.roboform.com/filling-test-all-fields
## Session storage
Do not forget to make a backup of your entire `~/Library/Application\ Support/Brave` folder.
1. [x] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
2. [x] Test that windows and tabs restore when closed, including active tab.
3. [x] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
## Cookie and Cache
1. [x] Make a backup of your profile, turn on all clearing in preferences and shut down. Make sure when you bring the browser back up everything is gone that is specified.
2. [x] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the Evercookie site does not remember the old evercookie value.
## Update tests
1. [x] Test that updating using `BRAVE_UPDATE_VERSION=0.8.3` env variable works correctly. | 1.0 | Manual Test Run on Windows x64 for 0.17.x RC3 - # Manual Tests Template
Copy this to a new issue on each release.
The tests should run for macOS, Windows x64, Windows ia32, and a Linux distro for each release.
## Installer
1. [x] Check that installer is close to the size of last release.
2. [x] Check signature: on macOS, run `spctl --assess --verbose /Applications/Brave.app/` and make sure it returns `accepted`. On Windows, right-click on the installer exe, go to Properties, open the Digital Signatures tab, and double-click on the signature. Make sure it says "The digital signature is OK" in the popup window.
3. [x] Check Brave, muon, and libchromiumcontent version in About and make sure it is EXACTLY as expected.
## Last changeset test
1. [x] Test what is covered by the last changeset (you can find this by clicking on the SHA in about:brave).
## Per release specialty tests
- [x] Webview crash on YouTube after a while of inactivity. ([#9663](https://github.com/brave/browser-laptop/issues/9663))
- [x] Update Muon to 4.1.4. ([#9645](https://github.com/brave/browser-laptop/issues/9645))
- [x] Dead tab when loading large content. ([#8045](https://github.com/brave/browser-laptop/issues/8045))
- [x] Upgrade to Chromium 59.0.3071.109. ([#9626](https://github.com/brave/browser-laptop/issues/9626))
- [ ] Unable to view HBOGO video. ([#8581](https://github.com/brave/browser-laptop/issues/8581))
## Widevine/Netflix test
1. [x] Test that you can log into Netflix and start a show.
## Ledger
1. [ ] Create a wallet with a value other than $5 selected in the monthly budget dropdown. Click on the 'Add Funds' button and check that Coinbase transactions are blocked.
2. [x] Remove all `ledger-*.json` files from `~/Library/Application\ Support/Brave/`. Go to the Payments tab in about:preferences, enable payments, click on `create wallet`. Check that the `add funds` button appears after a wallet is created.
3. [ ] Click on `add funds` and verify that adding funds through Coinbase increases the account balance.
4. [ ] Repeat the step above but add funds by scanning the QR code in a mobile bitcoin app instead of through Coinbase.
5. [x] Visit nytimes.com for a few seconds and make sure it shows up in the Payments table.
6. [x] Go to https://jsfiddle.net/LnwtLckc/5/ and click the register button. In the Payments tab, click `add funds`. Verify that the `transfer funds` button is visible and that clicking on `transfer funds` opens a jsfiddle URL in a new tab.
7. [x] Go to https://jsfiddle.net/LnwtLckc/5/ and click `unregister`. Verify that the `transfer funds` button no longer appears in the `add funds` modal.
8. [x] Check that disabling payments and enabling them again does not lose state.
## Sync
1. [x] Verify you are able to sync two devices using the secret code
2. [x] Visit a site on device 1 and change a shield setting; ensure that the saved site preference is synced to device 2
3. [x] Enable Browsing history sync on device 1, ensure the history is shown on device 2
4. [x] Import/Add bookmarks on device 1, ensure it is synced on device 2
5. [x] Ensure imported bookmark folder structure is maintained on device 2
6. [x] Ensure bookmark favicons are shown after sync
## Data
1. [x] Make sure that data from the last version appears in the new version OK.
2. [x] Test that the previous version's cookies are preserved in the next version.
## About pages
1. [x] Test that about:adblock loads
2. [x] Test that about:autofill loads
3. [x] Test that about:bookmarks loads bookmarks
4. [x] Test that about:downloads loads downloads
5. [x] Test that about:extensions loads
6. [x] Test that about:history loads history
7. [x] Test that about:passwords loads
8. [x] Test that about:styles loads
9. [x] Test that changing a preference in about:preferences takes effect right away
10. [x] Test that changing the language in about:preferences takes effect on restart
## Bookmarks
1. [x] Test that creating a bookmark on the bookmarks toolbar with the star button works
2. [x] Test that creating a bookmark on the bookmarks toolbar by dragging the un/lock icon works
3. [x] Test that creating a bookmark folder on the bookmarks toolbar works
4. [x] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
5. [x] Test that clicking a bookmark in the toolbar loads the bookmark.
6. [x] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Context menus
1. [x] Make sure context menu items in the URL bar work
2. [x] Make sure context menu items on content work with no selected text.
3. [x] Make sure context menu items on content work with selected text.
4. [x] Make sure context menu items on content work inside an editable control on `about:styles` (input, textarea, or contenteditable).
## Find on page
1. [x] Ensure search box is shown with shortcut
2. [x] Test successful find
3. [x] Test forward and backward find navigation
4. [x] Test failed find shows 0 results
5. [x] Test match case find
## Geolocation
1. [x] Check that https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation works
## Site hacks
1. [x] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
1. [x] Test that downloading a file works and that all actions on the download item work.
## Fullscreen
1. [x] Test that entering full-screen mode via View -> Toggle Full Screen works, and that you can exit the same way (not with Esc).
2. [x] Test that entering HTML5 full screen works and that Esc exits it. (youtube.com)
## Tabs, pinning, and tear-off tabs
1. [x] Test that tabs are pinnable
2. [x] Test that tabs are unpinnable
3. [x] Test that tabs are draggable to same tabset
4. [x] Test that tabs are draggable to alternate tabset
5. [x] Test that tabs can be torn off into a new window
6. [x] Test that you are able to reattach a tab that has been torn off into a new window
7. [x] Test that tab pages can be closed
8. [x] Test that tab pages can be muted
## Zoom
1. [x] Test zoom in / out shortcut works
2. [x] Test hamburger menu zooms.
3. [x] Test zoom saved when you close the browser and restore on a single site.
4. [x] Test zoom saved when you navigate within a single origin site.
5. [x] Test that navigating to a different origin resets the zoom
## Bravery settings
1. [x] Check that HTTPS Everywhere works by loading https://https-everywhere.badssl.com/
2. [x] Turning HTTPS Everywhere off and shields off both disable the redirect to https://https-everywhere.badssl.com/
3. [x] Check that ad replacement works on http://slashdot.org
4. [x] Check that toggling between blocking and allowing ads works as expected.
5. [x] Test that clicking through a cert error in https://badssl.com/ works.
6. [x] Test that Safe Browsing works (http://downloadme.org/)
7. [x] Turning Safe Browsing off and shields off both disable safe browsing for http://downloadme.org/.
8. [x] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [x] Test that about:preferences default Bravery settings take effect on pages with no site settings.
10. [x] Test that turning on fingerprinting protection in about:preferences shows 3 fingerprints blocked at https://jsfiddle.net/bkf50r8v/13/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
11. [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked.
12. [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
## Content tests
1. [x] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
2. [x] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
3. [x] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [x] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`. Then reload https://trac.torproject.org/projects/tor/login and make sure the password is autofilled.
5. [x] Open `about:styles` and type some misspellings on a textbox, make sure they are underlined.
6. [x] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
7. [x] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
8. [x] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
9. [x] Test that PDF is loaded at http://www.orimi.com/pdf-test.pdf
10. [x] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
## Flash tests
1. [x] Turn on Flash in about:preferences#security. Test that clicking on 'Install Flash' banner on myspace.com shows a notification to allow Flash and that the banner disappears when 'Allow' is clicked.
2. [x] Test that flash placeholder appears on http://www.homestarrunner.com
## Autofill tests
1. [x] Test that autofill works on http://www.roboform.com/filling-test-all-fields
## Session storage
Do not forget to make a backup of your entire `~/Library/Application\ Support/Brave` folder.
1. [x] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave` on Windows, `~/.config/brave` on Ubuntu)
2. [x] Test that windows and tabs restore when closed, including active tab.
3. [x] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave` on Windows, `~/.config/brave` on Ubuntu)
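The profile moves above can be sketched as shell commands. `PROFILE` below is a stand-in I've introduced for the per-OS profile location (`~/Library/Application Support/Brave` on macOS, `%appdata%\Brave` on Windows, `~/.config/brave` on Ubuntu); it defaults to a throwaway directory so the sketch is safe to run:

```shell
# Stand-in for the real profile directory; see the per-OS paths above.
PROFILE="${PROFILE:-/tmp/brave-profile-demo}"
rm -rf "$PROFILE" "${PROFILE}.bak"
mkdir -p "$PROFILE"
touch "$PROFILE/session-store-1"            # pretend session data exists

cp -R "$PROFILE" "${PROFILE}.bak"           # back up the whole profile first
mv "$PROFILE/session-store-1" "$PROFILE/session-store-1.moved"  # clean-session test

ls "$PROFILE"                               # -> session-store-1.moved
```

Restore afterwards by moving `session-store-1.moved` back, or by copying the `.bak` folder over the profile.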
## Cookie and Cache
1. [x] Make a backup of your profile, turn on all clearing in preferences and shut down. Make sure that when you bring the browser back up, everything specified is gone.
2. [x] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the Evercookie site does not remember the old evercookie value.
## Update tests
1. [x] Test that updating using `BRAVE_UPDATE_VERSION=0.8.3` env variable works correctly.
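The update item relies on an environment-variable override at launch. A minimal sketch of the invocation; the binary name below is an assumption and differs per platform (e.g. `/Applications/Brave.app/Contents/MacOS/Brave` on macOS):

```shell
# Launch the browser pretending to be an older version so the updater
# offers an upgrade. BRAVE_BIN is an assumed name, not a real default.
BRAVE_BIN="${BRAVE_BIN:-brave}"
if command -v "$BRAVE_BIN" >/dev/null 2>&1; then
    BRAVE_UPDATE_VERSION=0.8.3 "$BRAVE_BIN" &
else
    echo "Brave not installed; would run: BRAVE_UPDATE_VERSION=0.8.3 $BRAVE_BIN"
fi
```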
---

**NoSuchElement Exception in FieldAccessCompletionResolver**
- Repo: [ballerina-platform/ballerina-lang](https://api.github.com/repos/ballerina-platform/ballerina-lang)
- Date: 2022-01-05 08:14:19 (closed)
- Labels: Type/Bug, Team/LanguageServer, Area/Completion, GA-Test-Hackathon

**Description:**

As depicted above, no completions are provided at the cursor position due to the following exception.
```
[Error - 7:03:55 pm] Operation 'text/completion' failed! {uri: '/home/malintha/Documents/wso2/experimental/projects/ballerina/test-hackathon/project1/main.bal', [172:37], error: 'No value present'}
java.util.NoSuchElementException: No value present
at java.base/java.util.Optional.orElseThrow(Optional.java:382)
at org.ballerinalang.langserver.completions.util.FieldAccessCompletionResolver.getSymbolByName(FieldAccessCompletionResolver.java:279)
at org.ballerinalang.langserver.completions.util.FieldAccessCompletionResolver.transform(FieldAccessCompletionResolver.java:162)
at org.ballerinalang.langserver.completions.util.FieldAccessCompletionResolver.transform(FieldAccessCompletionResolver.java:79)
at io.ballerina.compiler.syntax.tree.MethodCallExpressionNode.apply(MethodCallExpressionNode.java:66)
at org.ballerinalang.langserver.completions.util.FieldAccessCompletionResolver.getTypeSymbol(FieldAccessCompletionResolver.java:266)
at org.ballerinalang.langserver.completions.providers.context.FieldAccessContext.getEntries(FieldAccessContext.java:67)
at org.ballerinalang.langserver.completions.providers.context.FieldAccessExpressionNodeContext.getCompletions(FieldAccessExpressionNodeContext.java:44)
at org.ballerinalang.langserver.completions.providers.context.FieldAccessExpressionNodeContext.getCompletions(FieldAccessExpressionNodeContext.java:33)
at org.ballerinalang.langserver.completions.util.CompletionUtil.route(CompletionUtil.java:136)
at org.ballerinalang.langserver.completions.util.CompletionUtil.getCompletionItems(CompletionUtil.java:82)
at org.ballerinalang.langserver.completions.BallerinaCompletionExtension.execute(BallerinaCompletionExtension.java:65)
at org.ballerinalang.langserver.completions.BallerinaCompletionExtension.execute(BallerinaCompletionExtension.java:40)
at org.ballerinalang.langserver.LangExtensionDelegator.completion(LangExtensionDelegator.java:122)
at org.ballerinalang.langserver.BallerinaTextDocumentService.lambda$completion$0(BallerinaTextDocumentService.java:147)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
at java.base/java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:479)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
```
**Steps to reproduce:**
```Ballerina
// Type2 is assumed (not shown in the report); inferred from the return value below.
type Type2 record {|
    int field1;
    int field2;
|};

function testFunction1() returns Type2 {
    return {field1: 0, field2: 0};
}

class MyClass {
    function () returns Type2 funcRef = testFunction1;

    function testSelf() {
        int field1 = self.funcRef().   // <- completion requested after this dot
    }
}
```
**Affected Versions:**
stage-sl-snapshot
14,117 | 5,558,201,310 | IssuesEvent | 2017-03-24 14:12:30 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Some c-ares symbols in Node headers not exported by Node executable | addons build cares | <!--
Thank you for reporting an issue.
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
- **Version**: 6.3.1
- **Platform**:`Linux <hostname redacted> 3.13.0-87-generic #133-Ubuntu SMP Tue May 24 18:32:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux`
<!-- Enter your issue details below this comment. -->
The `ares.h` file in the Node header distribution (the one that node-gyp downloads) contains symbols that are not dynamically exported by the Node binary. For example, `ares_gethostbyname` appears in that header file, but `objdump -T $(which node)` shows that that symbol appears nowhere in the dynamic symbol table. Because of this, I can build a native addon using that header without errors, but have it fail to find the symbol when I load it.
| 1.0 | Some c-ares symbols in Node headers not exported by Node executable - <!--
Thank you for reporting an issue.
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
- **Version**: 6.3.1
- **Platform**:`Linux <hostname redacted> 3.13.0-87-generic #133-Ubuntu SMP Tue May 24 18:32:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux`
<!-- Enter your issue details below this comment. -->
The `ares.h` file in the Node header distribution (the one that node-gyp downloads) contains symbols that are not dynamically exported by the Node binary. For example, `ares_gethostbyname` appears in that header file, but `objdump -T $(which node)` shows that that symbol appears nowhere in the dynamic symbol table. Because of this, I can build a native addon using that header without errors, but have it fail to find the symbol when I load it.
| non_test | some c ares symbols in node headers not exported by node executable thank you for reporting an issue please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform linux generic ubuntu smp tue may utc gnu linux the ares h file in the node header distribution the one that node gyp downloads contains symbols that are not dynamically exported by the node binary for example ares gethostbyname appears in that header file but objdump t which node shows that that symbol appears nowhere in the dynamic symbol table because of this i can build a native addon using that header without errors but have it fail to find the symbol when i load it | 0 |
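The symbol-export check described in the nodejs/node record above can be sketched as a small parser over `objdump -T` text output. This is an illustrative sketch only: the sample table below is invented, and a real check would run `objdump -T $(which node)` and scan its output the same way.

```python
# Sketch: check whether symbols declared in ares.h appear in a binary's
# dynamic symbol table, using the text output of `objdump -T`.
# The sample output below is illustrative, not taken from a real Node binary.

def exported_symbols(objdump_output):
    """Collect symbol names from `objdump -T` text output.

    Symbol lines start with a hex address and end with the symbol name;
    header lines and blank lines are skipped.
    """
    symbols = set()
    for line in objdump_output.splitlines():
        parts = line.split()
        # Only keep lines whose first token is a hex address, skipping
        # headers such as "DYNAMIC SYMBOL TABLE:".
        if len(parts) >= 2 and all(c in "0123456789abcdef" for c in parts[0]):
            symbols.add(parts[-1])
    return symbols

def missing_symbols(objdump_output, wanted):
    """Return the subset of `wanted` absent from the dynamic symbol table."""
    have = exported_symbols(objdump_output)
    return [s for s in wanted if s not in have]

sample = """\
node:     file format elf64-x86-64

DYNAMIC SYMBOL TABLE:
0000000000a1b2c3 g    DF .text  0000000000000123  Base        uv_run
0000000000a1b2d4 g    DF .text  0000000000000456  Base        ares_free_string
"""

# ares_gethostbyname is declared in ares.h but absent from the table, so a
# native addon linking against it would build yet fail to load.
print(missing_symbols(sample, ["ares_free_string", "ares_gethostbyname"]))
# → ['ares_gethostbyname']
```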
145,962 | 11,716,191,306 | IssuesEvent | 2020-03-09 15:16:46 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | NLL impl_trait_in_bindings ICE | A-impl-trait C-bug E-needstest F-impl_trait_in_bindings I-ICE T-compiler glacier requires-nightly | ```Rust
#![feature(impl_trait_in_bindings)]
fn bug<'a, 'b, T>()
where
'a: 'b,
{
let f: impl Fn(&'a T) -> &'b T = |x| x;
}
fn main() {}
```
<details>
<summary>Backtrace:</summary>
```
thread 'rustc' panicked at 'assertion failed: self.universal_regions.is_universal_region(shorter)', src/librustc_mir/borrow_check/nll/type_check/free_region_relations.rs:335:9
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
1: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:70
2: std::panicking::default_hook::{{closure}}
at src/libstd/sys_common/backtrace.rs:58
at src/libstd/panicking.rs:200
3: std::panicking::default_hook
at src/libstd/panicking.rs:215
4: rustc::util::common::panic_hook
5: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:482
6: std::panicking::begin_panic
7: <rustc_mir::borrow_check::nll::type_check::free_region_relations::UniversalRegionRelations<'tcx> as rustc::infer::outlives::free_region_map::FreeRegionRelations<'tcx>>::sub_free_regions
8: rustc::infer::InferCtxt::commit_if_ok
9: <rustc::traits::query::type_op::custom::CustomTypeOp<F, G> as rustc::traits::query::type_op::TypeOp<'gcx, 'tcx>>::fully_perform
10: rustc_mir::borrow_check::nll::type_check::TypeChecker::eq_opaque_type_and_type
11: rustc_mir::borrow_check::nll::type_check::TypeChecker::check_stmt
12: rustc_mir::borrow_check::nll::type_check::TypeChecker::typeck_mir
13: rustc_mir::borrow_check::nll::type_check::type_check
14: rustc_mir::borrow_check::do_mir_borrowck
15: rustc::ty::context::GlobalCtxt::enter_local
16: rustc_mir::borrow_check::mir_borrowck
17: rustc::ty::query::__query_compute::mir_borrowck
18: rustc::ty::query::<impl rustc::ty::query::config::QueryAccessors<'tcx> for rustc::ty::query::queries::mir_borrowck<'tcx>>::compute
19: rustc::dep_graph::graph::DepGraph::with_task_impl
20: <rustc::ty::query::plumbing::JobOwner<'a, 'tcx, Q>>::start
21: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::force_query_with_job
22: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::try_get_with
23: rustc::ty::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::par_body_owners
24: rustc::util::common::time
25: <std::thread::local::LocalKey<T>>::with
26: rustc::ty::context::TyCtxt::create_and_enter
27: rustc_driver::driver::compile_input
28: rustc_driver::run_compiler_with_pool
29: <scoped_tls::ScopedKey<T>>::set
30: rustc_driver::run_compiler
31: <scoped_tls::ScopedKey<T>>::set
query stack during panic:
#0 [mir_borrowck] processing `bug`
end of query stack
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.33.0-nightly (60e825389 2018-12-28) running on x86_64-unknown-linux-gnu
note: compiler flags: -C opt-level=3 -C codegen-units=1 --crate-type bin
note: some of the compiler flags provided by cargo are hidden
error: Could not compile `playground`.
```
</details>
https://play.rust-lang.org/?version=nightly&mode=release&edition=2018&gist=fed15c3173e52cba9875b50dace18c00
2015's Edition compiles fine (without NLL), 2018 ICE's | 1.0 | NLL impl_trait_in_bindings ICE - ```Rust
#![feature(impl_trait_in_bindings)]
fn bug<'a, 'b, T>()
where
'a: 'b,
{
let f: impl Fn(&'a T) -> &'b T = |x| x;
}
fn main() {}
```
<details>
<summary>Backtrace:</summary>
```
thread 'rustc' panicked at 'assertion failed: self.universal_regions.is_universal_region(shorter)', src/librustc_mir/borrow_check/nll/type_check/free_region_relations.rs:335:9
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
1: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:70
2: std::panicking::default_hook::{{closure}}
at src/libstd/sys_common/backtrace.rs:58
at src/libstd/panicking.rs:200
3: std::panicking::default_hook
at src/libstd/panicking.rs:215
4: rustc::util::common::panic_hook
5: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:482
6: std::panicking::begin_panic
7: <rustc_mir::borrow_check::nll::type_check::free_region_relations::UniversalRegionRelations<'tcx> as rustc::infer::outlives::free_region_map::FreeRegionRelations<'tcx>>::sub_free_regions
8: rustc::infer::InferCtxt::commit_if_ok
9: <rustc::traits::query::type_op::custom::CustomTypeOp<F, G> as rustc::traits::query::type_op::TypeOp<'gcx, 'tcx>>::fully_perform
10: rustc_mir::borrow_check::nll::type_check::TypeChecker::eq_opaque_type_and_type
11: rustc_mir::borrow_check::nll::type_check::TypeChecker::check_stmt
12: rustc_mir::borrow_check::nll::type_check::TypeChecker::typeck_mir
13: rustc_mir::borrow_check::nll::type_check::type_check
14: rustc_mir::borrow_check::do_mir_borrowck
15: rustc::ty::context::GlobalCtxt::enter_local
16: rustc_mir::borrow_check::mir_borrowck
17: rustc::ty::query::__query_compute::mir_borrowck
18: rustc::ty::query::<impl rustc::ty::query::config::QueryAccessors<'tcx> for rustc::ty::query::queries::mir_borrowck<'tcx>>::compute
19: rustc::dep_graph::graph::DepGraph::with_task_impl
20: <rustc::ty::query::plumbing::JobOwner<'a, 'tcx, Q>>::start
21: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::force_query_with_job
22: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::try_get_with
23: rustc::ty::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::par_body_owners
24: rustc::util::common::time
25: <std::thread::local::LocalKey<T>>::with
26: rustc::ty::context::TyCtxt::create_and_enter
27: rustc_driver::driver::compile_input
28: rustc_driver::run_compiler_with_pool
29: <scoped_tls::ScopedKey<T>>::set
30: rustc_driver::run_compiler
31: <scoped_tls::ScopedKey<T>>::set
query stack during panic:
#0 [mir_borrowck] processing `bug`
end of query stack
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.33.0-nightly (60e825389 2018-12-28) running on x86_64-unknown-linux-gnu
note: compiler flags: -C opt-level=3 -C codegen-units=1 --crate-type bin
note: some of the compiler flags provided by cargo are hidden
error: Could not compile `playground`.
```
</details>
https://play.rust-lang.org/?version=nightly&mode=release&edition=2018&gist=fed15c3173e52cba9875b50dace18c00
2015's Edition compiles fine (without NLL), 2018 ICE's | test | nll impl trait in bindings ice rust fn bug where a b let f impl fn a t b t x x fn main backtrace thread rustc panicked at assertion failed self universal regions is universal region shorter src librustc mir borrow check nll type check free region relations rs note some details are omitted run with rust backtrace full for a verbose backtrace stack backtrace std sys unix backtrace tracing imp unwind backtrace at src libstd sys unix backtrace tracing gcc s rs std sys common backtrace print at src libstd sys common backtrace rs std panicking default hook closure at src libstd sys common backtrace rs at src libstd panicking rs std panicking default hook at src libstd panicking rs rustc util common panic hook std panicking rust panic with hook at src libstd panicking rs std panicking begin panic as rustc infer outlives free region map freeregionrelations sub free regions rustc infer inferctxt commit if ok as rustc traits query type op typeop fully perform rustc mir borrow check nll type check typechecker eq opaque type and type rustc mir borrow check nll type check typechecker check stmt rustc mir borrow check nll type check typechecker typeck mir rustc mir borrow check nll type check type check rustc mir borrow check do mir borrowck rustc ty context globalctxt enter local rustc mir borrow check mir borrowck rustc ty query query compute mir borrowck rustc ty query for rustc ty query queries mir borrowck compute rustc dep graph graph depgraph with task impl start rustc ty query plumbing force query with job rustc ty query plumbing try get with rustc ty par body owners rustc util common time with rustc ty context tyctxt create and enter rustc driver driver compile input rustc driver run compiler with pool set rustc driver run compiler set query stack during panic processing bug end of query stack error internal compiler error unexpected panic note the compiler unexpectedly panicked this is a bug note we would 
appreciate a bug report note rustc nightly running on unknown linux gnu note compiler flags c opt level c codegen units crate type bin note some of the compiler flags provided by cargo are hidden error could not compile playground s edition compiles fine without nll ice s | 1 |
 77,785 | 27,162,468,661 | IssuesEvent | 2023-02-17 13:01:03 | mooltiverse/nyx | https://api.github.com/repos/mooltiverse/nyx | closed | Timezones in State files are inconsistent between Java and Go | type::defect | The timezone attributes in timestamps have different formats between Java and Go so a State file marshalled with one cannot be unmarshalled with the other.
See this example about a `releaseScope/commits/#/authorAction/timeStamp` coming from a JSON State file generated with Java:
```json
"timeStamp" : {
"timeStamp" : 1676540658000,
"timeZone" : "GMT+01:00"
}
```
While this is the same example when the State is generated by Go:
```json
"timeStamp" : {
"timeStamp" : 1676540658000,
"timeZone" : {}
}
```
Timezones in State files generated with Go are not even marshalled, as only a `{}` is saved.
This issue becomes evident when Go needs to unmarshal a State generated with Java or vice versa. This prevents the different implementations of Nyx from being used in combination (see #168).
When using the Go version to unmarshal a Java State we get the error:
```
unable to unmarshal content from file '/github/workspace/build/.nyx-state.json': json: cannot unmarshal string into Go struct field TimeStamp.releaseScope.commits.authorAction.timeStamp.timeZone of type time.Location
```
When using the Java version to unmarshal a Go State we get the error:
```
Unable to unmarshal content from file '/home/runner/work/toolbox/toolbox/build/.nyx-state.json'
com.mooltiverse.oss.nyx.io.DataAccessException: Unable to unmarshal content from file '/home/runner/work/toolbox/toolbox/build/.nyx-state.json'
at com.mooltiverse.oss.nyx.io.FileMapper.load(FileMapper.java:111)
at com.mooltiverse.oss.nyx.state.State.resume(State.java:166)
at com.mooltiverse.oss.nyx.Nyx.state(Nyx.java:183)
at com.mooltiverse.oss.nyx.gradle.CoreTask.nyx(CoreTask.java:143)
at com.mooltiverse.oss.nyx.gradle.InferTask.infer(InferTask.java:66)
...
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize value of type `java.util.TimeZone` from Object value (token `JsonToken.START_OBJECT`)
at [Source: (File); line: 252, column: 25] (through reference chain: com.mooltiverse.oss.nyx.state.State["releaseScope"]->com.mooltiverse.oss.nyx.entities.ReleaseScope["commits"]->java.util.ArrayList[0]->com.mooltiverse.oss.nyx.entities.git.Commit["authorAction"]->com.mooltiverse.oss.nyx.entities.git.Action["timeStamp"]->com.mooltiverse.oss.nyx.entities.git.TimeStamp["timeZone"])
at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:59)
at com.fasterxml.jackson.databind.DeserializationContext.reportInputMismatch(DeserializationContext.java:1741)
at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1515)
at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1420)
at com.fasterxml.jackson.databind.DeserializationContext.extractScalarFromObject(DeserializationContext.java:932)
at com.fasterxml.jackson.databind.deser.std.FromStringDeserializer.deserialize(FromStringDeserializer.java:155)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:542)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:564)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:439)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1405)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:352)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:542)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:564)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:439)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1405)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:352)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:542)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:564)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:439)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1405)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:352)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:355)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:28)
at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:138)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:314)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177)
at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:138)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:314)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:323)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4674)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3494)
at com.mooltiverse.oss.nyx.io.FileMapper.load(FileMapper.java:108)
... 190 more
``` | 1.0 | Timezones in State files are inconsistent between Java and Go - The timezone attributes in timestamps have different formats between Java and Go so a State file marshalled with one cannot be unmarshalled with the other.
See this example about a `releaseScope/commits/#/authorAction/timeStamp` coming from a JSON State file generated with Java:
```json
"timeStamp" : {
"timeStamp" : 1676540658000,
"timeZone" : "GMT+01:00"
}
```
While this is the same example when the State is generated by Go:
```json
"timeStamp" : {
"timeStamp" : 1676540658000,
"timeZone" : {}
}
```
Timezones in State files generated with Go are not even marshalled, as only a `{}` is saved.
This issue becomes evident when Go needs to unmarshal a State generated with Java or vice versa. This prevents the different implementations of Nyx from being used in combination (see #168).
When using the Go version to unmarshal a Java State we get the error:
```
unable to unmarshal content from file '/github/workspace/build/.nyx-state.json': json: cannot unmarshal string into Go struct field TimeStamp.releaseScope.commits.authorAction.timeStamp.timeZone of type time.Location
```
When using the Java version to unmarshal a Go State we get the error:
```
Unable to unmarshal content from file '/home/runner/work/toolbox/toolbox/build/.nyx-state.json'
com.mooltiverse.oss.nyx.io.DataAccessException: Unable to unmarshal content from file '/home/runner/work/toolbox/toolbox/build/.nyx-state.json'
at com.mooltiverse.oss.nyx.io.FileMapper.load(FileMapper.java:111)
at com.mooltiverse.oss.nyx.state.State.resume(State.java:166)
at com.mooltiverse.oss.nyx.Nyx.state(Nyx.java:183)
at com.mooltiverse.oss.nyx.gradle.CoreTask.nyx(CoreTask.java:143)
at com.mooltiverse.oss.nyx.gradle.InferTask.infer(InferTask.java:66)
...
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize value of type `java.util.TimeZone` from Object value (token `JsonToken.START_OBJECT`)
at [Source: (File); line: 252, column: 25] (through reference chain: com.mooltiverse.oss.nyx.state.State["releaseScope"]->com.mooltiverse.oss.nyx.entities.ReleaseScope["commits"]->java.util.ArrayList[0]->com.mooltiverse.oss.nyx.entities.git.Commit["authorAction"]->com.mooltiverse.oss.nyx.entities.git.Action["timeStamp"]->com.mooltiverse.oss.nyx.entities.git.TimeStamp["timeZone"])
at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:59)
at com.fasterxml.jackson.databind.DeserializationContext.reportInputMismatch(DeserializationContext.java:1741)
at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1515)
at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1420)
at com.fasterxml.jackson.databind.DeserializationContext.extractScalarFromObject(DeserializationContext.java:932)
at com.fasterxml.jackson.databind.deser.std.FromStringDeserializer.deserialize(FromStringDeserializer.java:155)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:542)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:564)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:439)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1405)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:352)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:542)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:564)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:439)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1405)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:352)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:542)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:564)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:439)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1405)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:352)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:355)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:28)
at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:138)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:314)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177)
at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:138)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:314)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:323)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4674)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3494)
at com.mooltiverse.oss.nyx.io.FileMapper.load(FileMapper.java:108)
... 190 more
``` | non_test | timezones in state files are inconsistent between java and go the timezone attributes in timestamps have different formats between java and go so a state file marhshalled with one cannot be unmarshalled with the other see this example about a releasescope commits authoraction timestamp coming from a json state file generated with java json timestamp timestamp timezone gmt while this is the same example when the state is generated by go json timestamp timestamp timezone timezones in state files generated with go are not even marshalled as only a is saved this issue becomes evident when go needs to unmarshal a state generated with java or viceversa this prevents the different implementations of nyx to be used in combination see when using the go version to unmarshal a java state we get the error unable to unmarshal content from file github workspace build nyx state json json cannot unmarshal string into go struct field timestamp releasescope commits authoraction timestamp timezone of type time location when using the java version to unmarshal a go state we get the error unable to unmarshal content from file home runner work toolbox toolbox build nyx state json com mooltiverse oss nyx io dataaccessexception unable to unmarshal content from file home runner work toolbox toolbox build nyx state json at com mooltiverse oss nyx io filemapper load filemapper java at com mooltiverse oss nyx state state resume state java at com mooltiverse oss nyx nyx state nyx java at com mooltiverse oss nyx gradle coretask nyx coretask java at com mooltiverse oss nyx gradle infertask infer infertask java caused by com fasterxml jackson databind exc mismatchedinputexception cannot deserialize value of type java util timezone from object value token jsontoken start object at through reference chain com mooltiverse oss nyx state state com mooltiverse oss nyx entities releasescope java util arraylist com mooltiverse oss nyx entities git commit com mooltiverse oss nyx entities 
git action com mooltiverse oss nyx entities git timestamp at com fasterxml jackson databind exc mismatchedinputexception from mismatchedinputexception java at com fasterxml jackson databind deserializationcontext reportinputmismatch deserializationcontext java at com fasterxml jackson databind deserializationcontext handleunexpectedtoken deserializationcontext java at com fasterxml jackson databind deserializationcontext handleunexpectedtoken deserializationcontext java at com fasterxml jackson databind deserializationcontext extractscalarfromobject deserializationcontext java at com fasterxml jackson databind deser std fromstringdeserializer deserialize fromstringdeserializer java at com fasterxml jackson databind deser settablebeanproperty deserialize settablebeanproperty java at com fasterxml jackson databind deser beandeserializer deserializewitherrorwrapping beandeserializer java at com fasterxml jackson databind deser beandeserializer deserializeusingpropertybased beandeserializer java at com fasterxml jackson databind deser beandeserializerbase deserializefromobjectusingnondefault beandeserializerbase java at com fasterxml jackson databind deser beandeserializer deserializefromobject beandeserializer java at com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at com fasterxml jackson databind deser settablebeanproperty deserialize settablebeanproperty java at com fasterxml jackson databind deser beandeserializer deserializewitherrorwrapping beandeserializer java at com fasterxml jackson databind deser beandeserializer deserializeusingpropertybased beandeserializer java at com fasterxml jackson databind deser beandeserializerbase deserializefromobjectusingnondefault beandeserializerbase java at com fasterxml jackson databind deser beandeserializer deserializefromobject beandeserializer java at com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at com fasterxml jackson databind deser 
settablebeanproperty deserialize settablebeanproperty java at com fasterxml jackson databind deser beandeserializer deserializewitherrorwrapping beandeserializer java at com fasterxml jackson databind deser beandeserializer deserializeusingpropertybased beandeserializer java at com fasterxml jackson databind deser beandeserializerbase deserializefromobjectusingnondefault beandeserializerbase java at com fasterxml jackson databind deser beandeserializer deserializefromobject beandeserializer java at com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at com fasterxml jackson databind deser std collectiondeserializer deserializefromarray collectiondeserializer java at com fasterxml jackson databind deser std collectiondeserializer deserialize collectiondeserializer java at com fasterxml jackson databind deser std collectiondeserializer deserialize collectiondeserializer java at com fasterxml jackson databind deser impl fieldproperty deserializeandset fieldproperty java at com fasterxml jackson databind deser beandeserializer vanilladeserialize beandeserializer java at com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at com fasterxml jackson databind deser impl fieldproperty deserializeandset fieldproperty java at com fasterxml jackson databind deser beandeserializer vanilladeserialize beandeserializer java at com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at com fasterxml jackson databind deser defaultdeserializationcontext readrootvalue defaultdeserializationcontext java at com fasterxml jackson databind objectmapper readmapandclose objectmapper java at com fasterxml jackson databind objectmapper readvalue objectmapper java at com mooltiverse oss nyx io filemapper load filemapper java more | 0 |
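A reader consuming State files produced by either implementation described in the mooltiverse/nyx record above has to tolerate both `timeZone` encodings: the Java build writes a string such as `"GMT+01:00"`, while the Go build writes an empty object. A minimal Python sketch of such a tolerant reader follows; the `default` fallback is an assumption for illustration, not Nyx's documented behavior.

```python
import json

def normalize_time_zone(raw, default="UTC"):
    """Accept either a string time zone or a (possibly empty) object."""
    if isinstance(raw, str) and raw:
        return raw
    if isinstance(raw, dict):
        # The Go marshalling in the issue loses the zone entirely,
        # so the best a reader can do is fall back to a default.
        return default
    raise ValueError(f"unsupported timeZone encoding: {raw!r}")

# The two encodings quoted in the issue, one per implementation.
java_doc = json.loads('{"timeStamp": 1676540658000, "timeZone": "GMT+01:00"}')
go_doc = json.loads('{"timeStamp": 1676540658000, "timeZone": {}}')

print(normalize_time_zone(java_doc["timeZone"]))  # → GMT+01:00
print(normalize_time_zone(go_doc["timeZone"]))    # → UTC
```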
645,517 | 21,007,139,865 | IssuesEvent | 2022-03-30 00:23:55 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Add version number when sending confirmations | enhancement privacy priority/P3 QA/Yes release-notes/exclude feature/ads OS/Desktop | Add version number to payload when sending confirmations, i.e. `{"versionNumber":"1.2.3.4"}` | 1.0 | Add version number when sending confirmations - Add version number to payload when sending confirmations, i.e. `{"versionNumber":"1.2.3.4"}` | non_test | add version number when sending confirmations add version number to payload when sending confirmations i e versionnumber | 0 |
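The brave/brave-browser record above asks only for a `versionNumber` field in the confirmations payload. As a sketch, the change amounts to attaching the version string before serialization; the surrounding `type` field is hypothetical, since the issue specifies only the key and the `1.2.3.4` example value.

```python
import json

def with_version_number(payload, version):
    """Return a copy of the payload with the versionNumber field set."""
    enriched = dict(payload)
    enriched["versionNumber"] = version
    return enriched

payload = with_version_number({"type": "confirmation"}, "1.2.3.4")
print(json.dumps(payload, sort_keys=True))
# → {"type": "confirmation", "versionNumber": "1.2.3.4"}
```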
124,770 | 10,323,450,593 | IssuesEvent | 2019-08-31 21:41:39 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | consider linting test/run-pass/*.rs to ensure all have assert/assert_eq | A-testsuite C-enhancement | spawned off of the story in #23112
We should consider making the compiletest system reject `run-pass` tests that contain no calls to `assert!` nor `assert_eq!`
(Better still would be to also reject if no such call is reached during an execution of the test, but that would probably require we add testing-specific assertion macros.)
We could add a comment flag to allow tests to opt out of this requirement (e.g. if they are regression tests that are indeed just checking that we do not segfault and do not have a clear assertion to include).
| 1.0 | consider linting test/run-pass/*.rs to ensure all have assert/assert_eq - spawned off of the story in #23112
We should consider making the compiletest system reject `run-pass` tests that contain no calls to `assert!` nor `assert_eq!`
(Better still would be to also reject if no such call is reached during an execution of the test, but that would probably require we add testing-specific assertion macros.)
We could add a comment flag to allow tests to opt out of this requirement (e.g. if they are regression tests that are indeed just checking that we do not segfault and do not have a clear assertion to include).
| test | consider linting test run pass rs to ensure all have assert assert eq spawned off of the story in we should consider making the compiletest system reject run pass tests that contain no calls to assert nor assert eq better still would be to also reject if no such call is reached during an execution of the test but that would probably require we add testing specific assertion macros we could add a comment flag to allow tests to opt out of this requirement e g if they are regression tests that are indeed just checking that we do not segfault and do not have a clear assertion to include | 1 |
219,406 | 17,090,132,886 | IssuesEvent | 2021-07-08 16:19:44 | hpc/charliecloud | https://api.github.com/repos/hpc/charliecloud | closed | testing: delete temp images after use | medium refactor test | There are a lot of BATS tests that build throw-away images not used after completion of the test. Now that we have `ch-image delete`, remove those images at the end of their tests. | 1.0 | testing: delete temp images after use - There are a lot of BATS tests that build throw-away images not used after completion of the test. Now that we have `ch-image delete`, remove those images at the end of their tests. | test | testing delete temp images after use there are a lot of bats tests that build throw away images not used after completion of the test now that we have ch image delete remove those images at the end of their tests | 1 |
128,848 | 18,070,224,286 | IssuesEvent | 2021-09-21 01:26:02 | bitbar/finka-js | https://api.github.com/repos/bitbar/finka-js | opened | CVE-2021-3807 (Medium) detected in ansi-regex-3.0.0.tgz, ansi-regex-5.0.0.tgz | security vulnerability | ## CVE-2021-3807 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ansi-regex-3.0.0.tgz</b>, <b>ansi-regex-5.0.0.tgz</b></p></summary>
<p>
<details><summary><b>ansi-regex-3.0.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz</a></p>
<p>Path to dependency file: finka-js/package.json</p>
<p>Path to vulnerable library: finka-js/node_modules/wide-align/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- mocha-8.4.0.tgz (Root Library)
- wide-align-1.1.3.tgz
- string-width-2.1.1.tgz
- strip-ansi-4.0.0.tgz
- :x: **ansi-regex-3.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>ansi-regex-5.0.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p>
<p>Path to dependency file: finka-js/package.json</p>
<p>Path to vulnerable library: finka-js/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.32.0.tgz (Root Library)
- strip-ansi-6.0.0.tgz
- :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"3.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"mocha:8.4.0;wide-align:1.1.3;string-width:2.1.1;strip-ansi:4.0.0;ansi-regex:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"},{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"5.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"eslint:7.32.0;strip-ansi:6.0.0;ansi-regex:5.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3807","vulnerabilityDetails":"ansi-regex is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-3807 (Medium) detected in ansi-regex-3.0.0.tgz, ansi-regex-5.0.0.tgz - ## CVE-2021-3807 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ansi-regex-3.0.0.tgz</b>, <b>ansi-regex-5.0.0.tgz</b></p></summary>
<p>
<details><summary><b>ansi-regex-3.0.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz</a></p>
<p>Path to dependency file: finka-js/package.json</p>
<p>Path to vulnerable library: finka-js/node_modules/wide-align/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- mocha-8.4.0.tgz (Root Library)
- wide-align-1.1.3.tgz
- string-width-2.1.1.tgz
- strip-ansi-4.0.0.tgz
- :x: **ansi-regex-3.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>ansi-regex-5.0.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p>
<p>Path to dependency file: finka-js/package.json</p>
<p>Path to vulnerable library: finka-js/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.32.0.tgz (Root Library)
- strip-ansi-6.0.0.tgz
- :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"3.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"mocha:8.4.0;wide-align:1.1.3;string-width:2.1.1;strip-ansi:4.0.0;ansi-regex:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"},{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"5.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"eslint:7.32.0;strip-ansi:6.0.0;ansi-regex:5.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3807","vulnerabilityDetails":"ansi-regex is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | non_test | cve medium detected in ansi regex tgz ansi regex tgz cve medium severity vulnerability vulnerable libraries ansi regex tgz ansi regex tgz ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file finka js package json path to vulnerable library finka js node modules wide align node modules ansi regex package json dependency hierarchy mocha tgz root library wide align tgz string width tgz strip ansi tgz x ansi regex tgz vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file finka js package json path to vulnerable library finka js node modules ansi regex package json dependency hierarchy eslint tgz root library strip ansi tgz x ansi regex tgz vulnerable library 
found in base branch master vulnerability details ansi regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi regex isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree mocha wide align string width strip ansi ansi regex isminimumfixversionavailable true minimumfixversion ansi regex packagetype javascript node js packagename ansi regex packageversion packagefilepaths istransitivedependency true dependencytree eslint strip ansi ansi regex isminimumfixversionavailable true minimumfixversion ansi regex basebranches vulnerabilityidentifier cve vulnerabilitydetails ansi regex is vulnerable to inefficient regular expression complexity vulnerabilityurl | 0 |
92,993 | 10,764,432,758 | IssuesEvent | 2019-11-01 08:15:47 | tiuweehan/ped | https://api.github.com/repos/tiuweehan/ped | opened | Sort command is not valid (and supposed to come in v1.3) | severity.Medium type.DocumentationBug | Sort is not a valid command in the application and UG says `coming in 1.3` even though the version is 1.3

| 1.0 | Sort command is not valid (and supposed to come in v1.3) - Sort is not a valid command in the application and UG says `coming in 1.3` even though the version is 1.3

| non_test | sort command is not valid and supposed to come in sort is not a valid command in the application and ug says coming in even though the version is | 0 |
40,973 | 10,598,142,231 | IssuesEvent | 2019-10-10 03:35:05 | Cyb3rWard0g/HELK | https://api.github.com/repos/Cyb3rWard0g/HELK | closed | Not an issue in Helk - Some difficulties while adding a new filebeat pipeline | custom build question | Scenario : File beat on a client send json data to Helk
1- install file beat and configure it 👍
on Filebeat:
Input:
- document type and path
output.kafka:
-
hosts: ["hostIP:9092"]
topic: "filebeat"
codec.json:
pretty: false
logs shows connected successfully no issues:

On Helk:
<img width="261" alt="pic2" src="https://user-images.githubusercontent.com/14360476/66286764-9a634b00-e91e-11e9-8f9c-13b0829e5c0e.PNG">
I can see no incoming messages to Kafka
<img width="499" alt="pict3" src="https://user-images.githubusercontent.com/14360476/66286911-2f664400-e91f-11e9-88e9-f84a8d0db80f.PNG">
What I am missing !! Any Ideas thanks :) | 1.0 | Not an issue in Helk - Some difficulties while adding a new filebeat pipeline - Scenario : File beat on a client send json data to Helk
1- install file beat and configure it 👍
on Filebeat:
Input:
- document type and path
output.kafka:
-
hosts: ["hostIP:9092"]
topic: "filebeat"
codec.json:
pretty: false
logs shows connected successfully no issues:

On Helk:
<img width="261" alt="pic2" src="https://user-images.githubusercontent.com/14360476/66286764-9a634b00-e91e-11e9-8f9c-13b0829e5c0e.PNG">
I can see no incoming messages to Kafka
<img width="499" alt="pict3" src="https://user-images.githubusercontent.com/14360476/66286911-2f664400-e91f-11e9-88e9-f84a8d0db80f.PNG">
What I am missing !! Any Ideas thanks :) | non_test | not an issue in helk some difficulties while adding a new filebeat pipeline scenario file beat on a client send json data to helk install file beat and configure it 👍 on filebeat input document type and path output kafka hosts topic filebeat codec json pretty false logs shows connected successfully no issues on helk img width alt src i can see no incoming messages to kafka img width alt src what i am missing any ideas thanks | 0 |