Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 844 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 12 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 248k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
122,277 | 12,148,110,954 | IssuesEvent | 2020-04-24 14:05:45 | RakipInitiative/ModelRepository | https://api.github.com/repos/RakipInitiative/ModelRepository | opened | Improve Feedback for errors (esp. OpenBUGS) | documentation question | - if a model execution fails due to changes in the parameter settings, there should be an error message specifying the cause, e.g:
- parameters out of bounds
- parameters are the wrong type
- for OpenBUGS:
- if the simulation in the software fails, is there a way to get the error message from R? | 1.0 | Improve Feedback for errors (esp. OpenBUGS) - - if a model execution fails due to changes in the parameter settings, there should be an error message specifying the cause, e.g:
- parameters out of bounds
- parameters are the wrong type
- for OpenBUGS:
- if the simulation in the software fails, is there a way to get the error message from R? | non_priority | improve feedback for errors esp openbugs if a model execution fails due to changes in the parameter settings there should be an error message specifying the cause e g parameters out of bounds parameters are the wrong type for openbugs if the simulation in the software fails is there a way to get the error message from r | 0 |
184,564 | 21,784,912,473 | IssuesEvent | 2022-05-14 01:46:53 | jinuem/Shopping-Cart-POC | https://api.github.com/repos/jinuem/Shopping-Cart-POC | closed | WS-2019-0333 (High) detected in handlebars-4.1.0.tgz - autoclosed | security vulnerability | ## WS-2019-0333 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.0.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.0.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.0.tgz</a></p>
<p>Path to dependency file: /Shopping-Cart-POC/rejsx/package.json</p>
<p>Path to vulnerable library: Shopping-Cart-POC/rejsx/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.5.tgz (Root Library)
- jest-23.6.0.tgz
- jest-cli-23.6.0.tgz
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.1.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In handlebars, versions prior to v4.5.3 are vulnerable to prototype pollution. Using a malicious template, it's possible to add or modify properties to the Object prototype. This can also lead to DOS and RCE in certain conditions.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/f7f05d7558e674856686b62a00cde5758f3b7a08>WS-2019-0333</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1325">https://www.npmjs.com/advisories/1325</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0333 (High) detected in handlebars-4.1.0.tgz - autoclosed - ## WS-2019-0333 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.0.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.0.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.0.tgz</a></p>
<p>Path to dependency file: /Shopping-Cart-POC/rejsx/package.json</p>
<p>Path to vulnerable library: Shopping-Cart-POC/rejsx/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.5.tgz (Root Library)
- jest-23.6.0.tgz
- jest-cli-23.6.0.tgz
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.1.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In handlebars, versions prior to v4.5.3 are vulnerable to prototype pollution. Using a malicious template, it's possible to add or modify properties to the Object prototype. This can also lead to DOS and RCE in certain conditions.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/f7f05d7558e674856686b62a00cde5758f3b7a08>WS-2019-0333</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1325">https://www.npmjs.com/advisories/1325</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | ws high detected in handlebars tgz autoclosed ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file shopping cart poc rejsx package json path to vulnerable library shopping cart poc rejsx node modules handlebars package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library vulnerability details in handlebars versions prior to are vulnerable to prototype pollution using a malicious template it s possbile to add or modify properties to the object prototype this can also lead to dos and rce in certain conditions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource | 0 |
65,922 | 27,278,951,799 | IssuesEvent | 2023-02-23 08:34:31 | Epitech-Nantes-Tek3/A-equals-l-squared | https://api.github.com/repos/Epitech-Nantes-Tek3/A-equals-l-squared | closed | Add Calendar Service | enhancement Service Front feature | **Short description:**
Add the google calendar service.
**Describe the solution you'd like**
First I want to be able to create a meeting in my Google Calendar, then in the Google Calendar of the logged-in user,
and then I want an action to be triggered when the new meeting starts.
| 1.0 | Add Calendar Service - **Short description:**
Add the google calendar service.
**Describe the solution you'd like**
First I want to be able to create a meeting in my Google Calendar, then in the Google Calendar of the logged-in user,
and then I want an action to be triggered when the new meeting starts.
| non_priority | add calendar service short description add the google calendar service describe the solution you d like first i want to be able to create a meeting in my google calendar then in the google calendar of the logged user and then i want to be able to have an action when it s the time of the new meeting | 0 |
102,213 | 21,933,004,250 | IssuesEvent | 2022-05-23 11:27:24 | Onelinerhub/onelinerhub | https://api.github.com/repos/Onelinerhub/onelinerhub | closed | Short solution needed: "chrome headless user agent" (chrome-headless) | help wanted good first issue code chrome-headless | Please help us write the most modern and shortest code solution for this issue:
**chrome headless user agent** (technology: [chrome-headless](https://onelinerhub.com/chrome-headless))
### Fast way
Just write the code solution in the comments.
### Preferred way
1. Create [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox).
2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) to explain the solution.
3. Link to this issue in comments of pull request. | 1.0 | Short solution needed: "chrome headless user agent" (chrome-headless) - Please help us write the most modern and shortest code solution for this issue:
**chrome headless user agent** (technology: [chrome-headless](https://onelinerhub.com/chrome-headless))
### Fast way
Just write the code solution in the comments.
### Preferred way
1. Create [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox).
2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) to explain the solution.
3. Link to this issue in comments of pull request. | non_priority | short solution needed chrome headless user agent chrome headless please help us write most modern and shortest code solution for this issue chrome headless user agent technology fast way just write the code solution in the comments prefered way create with a new code file inside don t forget to explain solution link to this issue in comments of pull request | 0 |
227,212 | 18,053,998,117 | IssuesEvent | 2021-09-20 04:42:11 | logicmoo/logicmoo_workspace | https://api.github.com/repos/logicmoo/logicmoo_workspace | opened | logicmoo.pfc.test.sanity_base.TML_01B JUnit | Test_9999 logicmoo.pfc.test.sanity_base unit_test TML_01B Passing | (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif tml_01b.pfc)
% ISSUE: https://github.com/logicmoo/logicmoo_workspace/issues/
% EDIT: https://github.com/logicmoo/logicmoo_workspace/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc
% JENKINS: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/TML_01B/logicmoo_pfc_test_sanity_base_TML_01B_JUnit/
% ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ATML_01B
```
%~ init_phase(after_load)
%~ init_phase(restore_state)
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/tml_01b.pfc'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- use_module(library(statistics)).
:- statistics.
/*~
% Started at Sun Sep 19 21:42:09 2021
% 1.024 seconds cpu time for 1,891,920 inferences
% 940,042 atoms, 31,015 functors, 29,587 predicates, 715 modules, 15,578,022 VM-codes
%
% Limit Allocated In use
% Local stack: - 52 Kb 4,216 b
% Global stack: - 64 Kb 16,464 b
% Trail stack: - 34 Kb 1,000 b
% Total: 1,024 Mb 150 Kb 21 Kb
%
% 2 garbage collections gained 124,136 bytes in 0.000 seconds.
% 2 atom garbage collections gained 1,319 atoms in 0.038 seconds.
% 5 clause garbage collections gained 1,644 clauses in 0.000 seconds.
% Stack shifts: 1 local, 0 global, 0 trail in 0.000 seconds
% 3 threads, 0 finished threads used 0.000 seconds
~*/
:- cls.
% reset runtime counter
%~ skipped(messy_on_output,cls)
% reset runtime counter
:- statistics(runtime,_Secs).
% Quick fwd test
% Quick fwd test
edge(X,Y) ==> path(X,Y).
path(X,Y),edge(Y, Z) ==> path(X, Z).
edge(1,2).
edge(2,3).
edge(3,4).
path(X,Y) ==> path(Y,X).
:- statistics(runtime,[_|MS]),
dmsg(assert_time_took_with_printing=ms(MS)).
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/tml_01b.pfc:23
%~ assert_time_took_with_printing = ms([60]).
:- time(mpred_test(path(1,4))).
%~ mpred_test("Test_0001_Line_0000__path_1",baseKB:path(1,4))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L25
/*~
%~ mpred_test("Test_0001_Line_0000__path_1",baseKB:path(1,4))
passed=info(why_was_true(baseKB:path(1,4)))
Justifications for path(1,4):
1.1 edge(3,4) % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L18 ]
1.2 path(W4,X4),edge(X4,Y4)==>path(W4,Y4) % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L14 ]
1.3 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L18 ',18)
1.4 mfl4(['X'=_,'Y'=_,'Z'=_],baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L14 ',14)
2.1 path(4,1) % [mfl4(_2248,_2250,_2252,_2254)]
2.2 path(W4,X4)==>path(X4,W4) % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L20 ]
2.3 path(1,4) % [mfl4(_3936,_3938,_3940,_3942)]
2.4 mfl4(['X'=_,'Y'=_],baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L20 ',20)
2.5 edge(3,4) % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L18 ]
2.6 path(W4,X4),edge(X4,Y4)==>path(W4,Y4) % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L14 ]
2.7 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L18 ',18)
2.8 mfl4(['X'=_,'Y'=_,'Z'=_],baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L14 ',14)
name = 'logicmoo.pfc.test.sanity_base.TML_01B-Test_0001_Line_0000__path_1'.
JUNIT_CLASSNAME = 'logicmoo.pfc.test.sanity_base.TML_01B'.
JUNIT_CMD = 'timeout --foreground --preserve-status -s SIGKILL -k 10s 10s swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif tml_01b.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-pfc-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.TML_01B-Test_0001_Line_0000__path_1-junit.xml
% 100,706 inferences, 0.020 CPU in 0.020 seconds (100% CPU, 5043841 Lips)
~*/
:- listing(path/2).
%~ skipped( listing( path/2))
:- statistics.
/*~
% Started at Sun Sep 19 21:42:09 2021
% 1.174 seconds cpu time for 2,360,105 inferences
% 939,485 atoms, 31,018 functors, 29,594 predicates, 715 modules, 15,583,763 VM-codes
%
% Limit Allocated In use
% Local stack: - 1,012 Kb 4,736 b
% Global stack: - 512 Kb 326 Kb
% Trail stack: - 66 Kb 1,000 b
% Total: 1,024 Mb 1,590 Kb 332 Kb
%
% 11 garbage collections gained 1,141,640 bytes in 0.001 seconds.
% 5 atom garbage collections gained 2,303 atoms in 0.089 seconds.
% 8 clause garbage collections gained 1,970 clauses in 0.000 seconds.
% Stack shifts: 5 local, 3 global, 1 trail in 0.001 seconds
% 3 threads, 0 finished threads used 0.000 seconds
~*/
%~ unused(no_junit_results)
Test_0001_Line_0000__path_1 result = passed.
%~ test_completed_exit(64)
```
totalTime=1.000
SUCCESS: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k tml_01b.pfc (returned 64) Add_LABELS='' Rem_LABELS='Skipped,Errors,Warnings,Overtime,Skipped,Skipped'
| 3.0 | logicmoo.pfc.test.sanity_base.TML_01B JUnit - (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif tml_01b.pfc)
% ISSUE: https://github.com/logicmoo/logicmoo_workspace/issues/
% EDIT: https://github.com/logicmoo/logicmoo_workspace/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc
% JENKINS: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/TML_01B/logicmoo_pfc_test_sanity_base_TML_01B_JUnit/
% ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ATML_01B
```
%~ init_phase(after_load)
%~ init_phase(restore_state)
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/tml_01b.pfc'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- use_module(library(statistics)).
:- statistics.
/*~
% Started at Sun Sep 19 21:42:09 2021
% 1.024 seconds cpu time for 1,891,920 inferences
% 940,042 atoms, 31,015 functors, 29,587 predicates, 715 modules, 15,578,022 VM-codes
%
% Limit Allocated In use
% Local stack: - 52 Kb 4,216 b
% Global stack: - 64 Kb 16,464 b
% Trail stack: - 34 Kb 1,000 b
% Total: 1,024 Mb 150 Kb 21 Kb
%
% 2 garbage collections gained 124,136 bytes in 0.000 seconds.
% 2 atom garbage collections gained 1,319 atoms in 0.038 seconds.
% 5 clause garbage collections gained 1,644 clauses in 0.000 seconds.
% Stack shifts: 1 local, 0 global, 0 trail in 0.000 seconds
% 3 threads, 0 finished threads used 0.000 seconds
~*/
:- cls.
% reset runtime counter
%~ skipped(messy_on_output,cls)
% reset runtime counter
:- statistics(runtime,_Secs).
% Quick fwd test
% Quick fwd test
edge(X,Y) ==> path(X,Y).
path(X,Y),edge(Y, Z) ==> path(X, Z).
edge(1,2).
edge(2,3).
edge(3,4).
path(X,Y) ==> path(Y,X).
:- statistics(runtime,[_|MS]),
dmsg(assert_time_took_with_printing=ms(MS)).
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/tml_01b.pfc:23
%~ assert_time_took_with_printing = ms([60]).
:- time(mpred_test(path(1,4))).
%~ mpred_test("Test_0001_Line_0000__path_1",baseKB:path(1,4))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L25
/*~
%~ mpred_test("Test_0001_Line_0000__path_1",baseKB:path(1,4))
passed=info(why_was_true(baseKB:path(1,4)))
Justifications for path(1,4):
1.1 edge(3,4) % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L18 ]
1.2 path(W4,X4),edge(X4,Y4)==>path(W4,Y4) % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L14 ]
1.3 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L18 ',18)
1.4 mfl4(['X'=_,'Y'=_,'Z'=_],baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L14 ',14)
2.1 path(4,1) % [mfl4(_2248,_2250,_2252,_2254)]
2.2 path(W4,X4)==>path(X4,W4) % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L20 ]
2.3 path(1,4) % [mfl4(_3936,_3938,_3940,_3942)]
2.4 mfl4(['X'=_,'Y'=_],baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L20 ',20)
2.5 edge(3,4) % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L18 ]
2.6 path(W4,X4),edge(X4,Y4)==>path(W4,Y4) % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L14 ]
2.7 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L18 ',18)
2.8 mfl4(['X'=_,'Y'=_,'Z'=_],baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/tml_01b.pfc#L14 ',14)
name = 'logicmoo.pfc.test.sanity_base.TML_01B-Test_0001_Line_0000__path_1'.
JUNIT_CLASSNAME = 'logicmoo.pfc.test.sanity_base.TML_01B'.
JUNIT_CMD = 'timeout --foreground --preserve-status -s SIGKILL -k 10s 10s swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif tml_01b.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-pfc-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.TML_01B-Test_0001_Line_0000__path_1-junit.xml
% 100,706 inferences, 0.020 CPU in 0.020 seconds (100% CPU, 5043841 Lips)
~*/
:- listing(path/2).
%~ skipped( listing( path/2))
:- statistics.
/*~
% Started at Sun Sep 19 21:42:09 2021
% 1.174 seconds cpu time for 2,360,105 inferences
% 939,485 atoms, 31,018 functors, 29,594 predicates, 715 modules, 15,583,763 VM-codes
%
% Limit Allocated In use
% Local stack: - 1,012 Kb 4,736 b
% Global stack: - 512 Kb 326 Kb
% Trail stack: - 66 Kb 1,000 b
% Total: 1,024 Mb 1,590 Kb 332 Kb
%
% 11 garbage collections gained 1,141,640 bytes in 0.001 seconds.
% 5 atom garbage collections gained 2,303 atoms in 0.089 seconds.
% 8 clause garbage collections gained 1,970 clauses in 0.000 seconds.
% Stack shifts: 5 local, 3 global, 1 trail in 0.001 seconds
% 3 threads, 0 finished threads used 0.000 seconds
~*/
%~ unused(no_junit_results)
Test_0001_Line_0000__path_1 result = passed.
%~ test_completed_exit(64)
```
totalTime=1.000
SUCCESS: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k tml_01b.pfc (returned 64) Add_LABELS='' Rem_LABELS='Skipped,Errors,Warnings,Overtime,Skipped,Skipped'
| non_priority | logicmoo pfc test sanity base tml junit cd var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base timeout foreground preserve status s sigkill k swipl x var lib jenkins workspace logicmoo workspace bin lmoo clif tml pfc issue edit jenkins issue search init phase after load init phase restore state running var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base tml pfc this test might need use module library logicmoo plarkc use module library statistics statistics started at sun sep seconds cpu time for inferences atoms functors predicates modules vm codes limit allocated in use local stack kb b global stack kb b trail stack kb b total mb kb kb garbage collections gained bytes in seconds atom garbage collections gained atoms in seconds clause garbage collections gained clauses in seconds stack shifts local global trail in seconds threads finished threads used seconds cls reset runtime counter skipped messy on output cls reset runtime counter statistics runtime secs quick fwd test quick fwd test edge x y path x y path x y edge y z path x z edge edge edge path x y path y x statistics runtime dmsg assert time took with printing ms ms var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base tml pfc assert time took with printing ms time mpred test path mpred test test line path basekb path file mpred test test line path basekb path passed info why was true basekb path justifications for path edge path edge path basekb basekb path path path path basekb edge path edge path basekb basekb name logicmoo pfc test sanity base tml test line path junit classname logicmoo pfc test sanity base tml junit cmd timeout foreground preserve status s sigkill k swipl x var lib jenkins workspace logicmoo workspace bin lmoo clif tml pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo pfc test sanity base units logicmoo pfc test sanity base tml test line path junit xml inferences cpu in seconds cpu lips listing path skipped listing path statistics started at sun sep seconds cpu time for inferences atoms functors predicates modules vm codes limit allocated in use local stack kb b global stack kb kb trail stack kb b total mb kb kb garbage collections gained bytes in seconds atom garbage collections gained atoms in seconds clause garbage collections gained clauses in seconds stack shifts local global trail in seconds threads finished threads used seconds unused no junit results test line path result passed test completed exit totaltime success var lib jenkins workspace logicmoo workspace bin lmoo junit minor k tml pfc returned add labels rem labels skipped errors warnings overtime skipped skipped | 0 |
48,421 | 25,519,673,116 | IssuesEvent | 2022-11-28 19:18:28 | rubymonsters/speakerinnen_liste | https://api.github.com/repos/rubymonsters/speakerinnen_liste | closed | Cache docker images in Travis | performance | Once we merge #971, the build will be slower than it is now. This can be improved with some caching. | True | Cache docker images in Travis - Once we merge #971, the build will be slower than it is now. This can be improved with some caching. | non_priority | cache docker images in travis once we merge the build will be slower than now this can be improved with some caching | 0 |
246,789 | 18,853,962,138 | IssuesEvent | 2021-11-12 02:06:10 | plutoniumpw/landing | https://api.github.com/repos/plutoniumpw/landing | closed | Update T6 Server Guide | documentation | The new server.cfg zip has a `localappdata` folder, yet the guide hasn't been updated.
https://plutonium.pw/docs/server/t6/setting-up-a-server/#1-preparation
Screenshots n such | 1.0 | Update T6 Server Guide - The new server.cfg zip has a `localappdata` folder, yet the guide hasn't been updated.
https://plutonium.pw/docs/server/t6/setting-up-a-server/#1-preparation
Screenshots n such | non_priority | update server guide new server cfg zip has a localappdata folder yet guide hasn t been updated screenshots n such | 0 |
52,748 | 13,225,001,742 | IssuesEvent | 2020-08-17 20:17:24 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | test scripts should be moved to resources/test (Trac #278) | Migrated from Trac combo reconstruction defect | Currently a lot of test scripts in icerec projects reside in resources/scripts. For consistency with offline-software they should be moved to resources/test (or is it tests?).
Here's a list of currently affected projects created by
```text
grep resources/scripts */CMakeLists.txt
```
in icerec trunk source:
BadDomList
bayesian-priors
cfirst
cflash
clast
core-removal
cramer-rao
credo
cscd-llh
DeepCore_Filter
dipolefit
DomTools
double-muon
ehe-star
FeatureExtractor
fill-ratio
finiteReco
flat-ntuple
gulliver
gulliver-modules
IceDwalk
ipdf
lilliput
linefit
lowe-noise-cleaner
muon-bundle-reco
muon-llh-reco
NFE
ophelia
paraboloid
particleforge
photorec-llh
portia
pulse-splitter
SeededRTCleaning
SLCHitExtractor
tensor-of-inertia
topeventbuilder
toprec
topwaveprocessor
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/278">https://code.icecube.wisc.edu/projects/icecube/ticket/278</a>, reported by kislat and owned by kislat</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-07-07T22:32:33",
"_ts": "1436308353324715",
"description": "Currently a lot of test scripts in icerec projects reside in resources/scripts. For consistency with offline-software they should be moved to resources/test (or is it tests?).\n\nHere's a list of currently affected projects created by \n{{{\n grep resources/scripts */CMakeLists.txt\n}}}\nin icerec trunk source:\n\nBadDomList[[BR]]\nbayesian-priors[[BR]]\ncfirst[[BR]]\ncflash[[BR]]\nclast[[BR]]\ncore-removal[[BR]]\ncramer-rao[[BR]]\ncredo[[BR]]\ncscd-llh[[BR]]\nDeepCore_Filter[[BR]]\ndipolefit[[BR]]\nDomTools[[BR]]\ndouble-muon[[BR]]\nehe-star[[BR]]\nFeatureExtractor[[BR]]\nfill-ratio[[BR]]\nfiniteReco[[BR]]\nflat-ntuple[[BR]]\ngulliver[[BR]]\ngulliver-modules[[BR]]\nIceDwalk[[BR]]\nipdf[[BR]]\nlilliput[[BR]]\nlinefit[[BR]]\nlowe-noise-cleaner[[BR]]\nmuon-bundle-reco[[BR]]\nmuon-llh-reco[[BR]]\nNFE[[BR]]\nophelia[[BR]]\nparaboloid[[BR]]\nparticleforge[[BR]]\nphotorec-llh[[BR]]\nportia[[BR]]\npulse-splitter[[BR]]\nSeededRTCleaning[[BR]]\nSLCHitExtractor[[BR]]\ntensor-of-inertia[[BR]]\ntopeventbuilder[[BR]]\ntoprec[[BR]]\ntopwaveprocessor\n",
"reporter": "kislat",
"cc": "emanuel.jacobi@desy.de",
"resolution": "wontfix",
"time": "2011-06-09T22:49:15",
"component": "combo reconstruction",
"summary": "test scripts should be moved to resources/test",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "kislat",
"type": "defect"
}
```
</p>
</details>
| 1.0 | test scripts should be moved to resources/test (Trac #278) - Currently a lot of test scripts in icerec projects reside in resources/scripts. For consistency with offline-software they should be moved to resources/test (or is it tests?).
Here's a list of currently affected projects created by
```text
grep resources/scripts */CMakeLists.txt
```
in icerec trunk source:
BadDomList
bayesian-priors
cfirst
cflash
clast
core-removal
cramer-rao
credo
cscd-llh
DeepCore_Filter
dipolefit
DomTools
double-muon
ehe-star
FeatureExtractor
fill-ratio
finiteReco
flat-ntuple
gulliver
gulliver-modules
IceDwalk
ipdf
lilliput
linefit
lowe-noise-cleaner
muon-bundle-reco
muon-llh-reco
NFE
ophelia
paraboloid
particleforge
photorec-llh
portia
pulse-splitter
SeededRTCleaning
SLCHitExtractor
tensor-of-inertia
topeventbuilder
toprec
topwaveprocessor
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/278">https://code.icecube.wisc.edu/projects/icecube/ticket/278</a>, reported by kislat and owned by kislat</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-07-07T22:32:33",
"_ts": "1436308353324715",
"description": "Currently a lot of test scripts in icerec projects reside in resources/scripts. For consistency with offline-software they should be moved to resources/test (or is it tests?).\n\nHere's a list of currently affected projects created by \n{{{\n grep resources/scripts */CMakeLists.txt\n}}}\nin icerec trunk source:\n\nBadDomList[[BR]]\nbayesian-priors[[BR]]\ncfirst[[BR]]\ncflash[[BR]]\nclast[[BR]]\ncore-removal[[BR]]\ncramer-rao[[BR]]\ncredo[[BR]]\ncscd-llh[[BR]]\nDeepCore_Filter[[BR]]\ndipolefit[[BR]]\nDomTools[[BR]]\ndouble-muon[[BR]]\nehe-star[[BR]]\nFeatureExtractor[[BR]]\nfill-ratio[[BR]]\nfiniteReco[[BR]]\nflat-ntuple[[BR]]\ngulliver[[BR]]\ngulliver-modules[[BR]]\nIceDwalk[[BR]]\nipdf[[BR]]\nlilliput[[BR]]\nlinefit[[BR]]\nlowe-noise-cleaner[[BR]]\nmuon-bundle-reco[[BR]]\nmuon-llh-reco[[BR]]\nNFE[[BR]]\nophelia[[BR]]\nparaboloid[[BR]]\nparticleforge[[BR]]\nphotorec-llh[[BR]]\nportia[[BR]]\npulse-splitter[[BR]]\nSeededRTCleaning[[BR]]\nSLCHitExtractor[[BR]]\ntensor-of-inertia[[BR]]\ntopeventbuilder[[BR]]\ntoprec[[BR]]\ntopwaveprocessor\n",
"reporter": "kislat",
"cc": "emanuel.jacobi@desy.de",
"resolution": "wontfix",
"time": "2011-06-09T22:49:15",
"component": "combo reconstruction",
"summary": "test scripts should be moved to resources/test",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "kislat",
"type": "defect"
}
```
</p>
</details>
21,056 | 10,568,858,623 | IssuesEvent | 2019-10-06 15:47:45 | gkueny/gkueny.github.io | https://api.github.com/repos/gkueny/gkueny.github.io | closed | WS-2018-0021 Medium Severity Vulnerability detected by WhiteSource | security vulnerability | ## WS-2018-0021 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.6-3.3.6.js</b></p></summary>
<p>Google-styled theme for Bootstrap.</p>
<p>path: /gkueny.github.io/css/js/bootstrap.js</p>
<p>
<p>Library home page: <a href=https://cdnjs.cloudflare.com/ajax/libs/todc-bootstrap/3.3.6-3.3.6/js/bootstrap.js>https://cdnjs.cloudflare.com/ajax/libs/todc-bootstrap/3.3.6-3.3.6/js/bootstrap.js</a></p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.6-3.3.6.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XSS in data-target in bootstrap (3.3.7 and before)
<p>Publish Date: 2017-06-27
<p>URL: <a href=>WS-2018-0021</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/commit/d9be1da55bf0f94a81e8a2c9acf5574fb801306e">https://github.com/twbs/bootstrap/commit/d9be1da55bf0f94a81e8a2c9acf5574fb801306e</a></p>
<p>Release Date: 2017-08-25</p>
<p>Fix Resolution: Replace or update the following files: alert.js, carousel.js, collapse.js, dropdown.js, modal.js</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
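The underlying issue was that `data-target` selector strings reached jQuery without validation. A minimal guard-style sketch is below (an illustration only, not Bootstrap's exact patch; `safeDataTarget` is a hypothetical helper):

```javascript
// Sketch of the guard idea behind the fix (assumption: upstream logic differs):
// accept only values that look like a plain id/class selector, so markup such
// as "<img src=x onerror=...>" can never be treated as a selector.
function safeDataTarget(selector) {
  return typeof selector === "string" && /^[#.][\w-]+$/.test(selector)
    ? selector
    : null;
}
```

With this guard, `safeDataTarget("#myModal")` passes through unchanged, while anything containing markup is rejected and falls back to `null`.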
57,321 | 24,098,982,932 | IssuesEvent | 2022-09-19 21:43:25 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Unsetting a field format is not possible | bug Team:AppServicesSv duplicate impact:medium | **Kibana version:** 8.3.3
**Elasticsearch version:** 8.3.3
**Server OS version:** Linux
**Browser version:** Chrome 104
**Browser OS version:** macOS
**Original install method (e.g. download page, yum, from source, etc.):** Docker image
**Describe the bug:**
When I set a specific format for a field in a data view, I can no longer unset it. It always resets to the previously chosen format.
**Steps to reproduce:**
1. Select a format for field in a data view, e.g. Bytes
2. Save it
3. Turn off the format
4. Save it
5. Refresh the page
6. Notice that the format stays at the previously selected one
**Expected behavior:**
The format should return to being unset.
**Screenshots (if relevant):**
Before unsetting it, I had explicitly changed this field to be a Number:

I then unset it here:
<img width="566" alt="image" src="https://user-images.githubusercontent.com/582444/185087638-85e9c29d-17fb-42d8-a7ce-f9259a36bc2d.png">
Then I save it, and it looks like being saved:

But when I refresh the page, the format is still set as Number.
55,448 | 11,431,441,852 | IssuesEvent | 2020-02-04 12:09:32 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | [0.9.702] Sub Editor - Toggle Visibility icon, doesn't really look like an icon. | Bug Code | - [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
Unless you stare really hard next to the generate-waypoint button, the toggle-visibility button doesn't really look like it exists. Make it an eye icon?
**Version**
0.9.702
170,407 | 14,259,467,781 | IssuesEvent | 2020-11-20 08:19:47 | ExploreASL/ExploreASL | https://api.github.com/repos/ExploreASL/ExploreASL | closed | Improve internal documentation | documentation | Goal: Create a documentation similar to [NIFTYTORCH](https://niftytorch.github.io/doc/) or [MONAI](https://docs.monai.io/en/latest/) ?
* Add **README** files to subfolders to improve the orientation within the ExploreASL project.
* Improve code readability and overall access for new developers.
* Integrate all the **README** files into an interactive documentation
74,005 | 15,298,926,531 | IssuesEvent | 2021-02-24 10:18:50 | rsoreq/kendo-ui-core | https://api.github.com/repos/rsoreq/kendo-ui-core | opened | CVE-2020-15256 (High) detected in object-path-0.9.2.tgz | security vulnerability | ## CVE-2020-15256 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>object-path-0.9.2.tgz</b></p></summary>
<p>Access deep properties using a path</p>
<p>Library home page: <a href="https://registry.npmjs.org/object-path/-/object-path-0.9.2.tgz">https://registry.npmjs.org/object-path/-/object-path-0.9.2.tgz</a></p>
<p>Path to dependency file: kendo-ui-core/package.json</p>
<p>Path to vulnerable library: kendo-ui-core/node_modules/object-path/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.9.12.tgz (Root Library)
- eazy-logger-2.1.3.tgz
- tfunk-3.1.0.tgz
- :x: **object-path-0.9.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rsoreq/kendo-ui-core/commit/62afbcdf79c4c7052417ecc86eb31bd6bc04e1ad">62afbcdf79c4c7052417ecc86eb31bd6bc04e1ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability has been found in `object-path` <= 0.11.4 affecting the `set()` method. The vulnerability is limited to the `includeInheritedProps` mode (if version >= 0.11.0 is used), which has to be explicitly enabled by creating a new instance of `object-path` and setting the option `includeInheritedProps: true`, or by using the default `withInheritedProps` instance. The default operating mode is not affected by the vulnerability if version >= 0.11.0 is used. Any usage of `set()` in versions < 0.11.0 is vulnerable. The issue is fixed in object-path version 0.11.5. As a workaround, don't use the `includeInheritedProps: true` option or the `withInheritedProps` instance if using a version >= 0.11.0.
<p>Publish Date: 2020-10-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15256>CVE-2020-15256</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/mariocasciaro/object-path/security/advisories/GHSA-cwx2-736x-mf6w">https://github.com/mariocasciaro/object-path/security/advisories/GHSA-cwx2-736x-mf6w</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 0.11.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"object-path","packageVersion":"0.9.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"browser-sync:2.9.12;eazy-logger:2.1.3;tfunk:3.1.0;object-path:0.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.11.5"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-15256","vulnerabilityDetails":"A prototype pollution vulnerability has been found in `object-path` \u003c\u003d 0.11.4 affecting the `set()` method. The vulnerability is limited to the `includeInheritedProps` mode (if version \u003e\u003d 0.11.0 is used), which has to be explicitly enabled by creating a new instance of `object-path` and setting the option `includeInheritedProps: true`, or by using the default `withInheritedProps` instance. The default operating mode is not affected by the vulnerability if version \u003e\u003d 0.11.0 is used. Any usage of `set()` in versions \u003c 0.11.0 is vulnerable. The issue is fixed in object-path version 0.11.5 As a workaround, don\u0027t use the `includeInheritedProps: true` options or the `withInheritedProps` instance if using a version \u003e\u003d 0.11.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15256","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-15256 (High) detected in object-path-0.9.2.tgz - ## CVE-2020-15256 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>object-path-0.9.2.tgz</b></p></summary>
<p>Access deep properties using a path</p>
<p>Library home page: <a href="https://registry.npmjs.org/object-path/-/object-path-0.9.2.tgz">https://registry.npmjs.org/object-path/-/object-path-0.9.2.tgz</a></p>
<p>Path to dependency file: kendo-ui-core/package.json</p>
<p>Path to vulnerable library: kendo-ui-core/node_modules/object-path/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.9.12.tgz (Root Library)
- eazy-logger-2.1.3.tgz
- tfunk-3.1.0.tgz
- :x: **object-path-0.9.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rsoreq/kendo-ui-core/commit/62afbcdf79c4c7052417ecc86eb31bd6bc04e1ad">62afbcdf79c4c7052417ecc86eb31bd6bc04e1ad</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability has been found in `object-path` <= 0.11.4 affecting the `set()` method. The vulnerability is limited to the `includeInheritedProps` mode (if version >= 0.11.0 is used), which has to be explicitly enabled by creating a new instance of `object-path` and setting the option `includeInheritedProps: true`, or by using the default `withInheritedProps` instance. The default operating mode is not affected by the vulnerability if version >= 0.11.0 is used. Any usage of `set()` in versions < 0.11.0 is vulnerable. The issue is fixed in object-path version 0.11.5 As a workaround, don't use the `includeInheritedProps: true` options or the `withInheritedProps` instance if using a version >= 0.11.0.
<p>Publish Date: 2020-10-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15256>CVE-2020-15256</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/mariocasciaro/object-path/security/advisories/GHSA-cwx2-736x-mf6w">https://github.com/mariocasciaro/object-path/security/advisories/GHSA-cwx2-736x-mf6w</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 0.11.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"object-path","packageVersion":"0.9.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"browser-sync:2.9.12;eazy-logger:2.1.3;tfunk:3.1.0;object-path:0.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.11.5"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-15256","vulnerabilityDetails":"A prototype pollution vulnerability has been found in `object-path` \u003c\u003d 0.11.4 affecting the `set()` method. The vulnerability is limited to the `includeInheritedProps` mode (if version \u003e\u003d 0.11.0 is used), which has to be explicitly enabled by creating a new instance of `object-path` and setting the option `includeInheritedProps: true`, or by using the default `withInheritedProps` instance. The default operating mode is not affected by the vulnerability if version \u003e\u003d 0.11.0 is used. Any usage of `set()` in versions \u003c 0.11.0 is vulnerable. 
The issue is fixed in object-path version 0.11.5 As a workaround, don\u0027t use the `includeInheritedProps: true` options or the `withInheritedProps` instance if using a version \u003e\u003d 0.11.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15256","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in object path tgz cve high severity vulnerability vulnerable library object path tgz access deep properties using a path library home page a href path to dependency file kendo ui core package json path to vulnerable library kendo ui core node modules object path package json dependency hierarchy browser sync tgz root library eazy logger tgz tfunk tgz x object path tgz vulnerable library found in head commit a href found in base branch master vulnerability details a prototype pollution vulnerability has been found in object path is used which has to be explicitly enabled by creating a new instance of object path and setting the option includeinheritedprops true or by using the default withinheritedprops instance the default operating mode is not affected by the vulnerability if version is used any usage of set in versions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree browser sync eazy logger tfunk object path isminimumfixversionavailable true minimumfixversion basebranches 
vulnerabilityidentifier cve vulnerabilitydetails a prototype pollution vulnerability has been found in object path affecting the set method the vulnerability is limited to the includeinheritedprops mode if version is used which has to be explicitly enabled by creating a new instance of object path and setting the option includeinheritedprops true or by using the default withinheritedprops instance the default operating mode is not affected by the vulnerability if version is used any usage of set in versions is vulnerable the issue is fixed in object path version as a workaround don use the includeinheritedprops true options or the withinheritedprops instance if using a version vulnerabilityurl | 0 |
85,652 | 24,649,147,063 | IssuesEvent | 2022-10-17 17:06:40 | dotnet/arcade | https://api.github.com/repos/dotnet/arcade | closed | Build failed: dotnet-arcade-validation-official/main #20221016.1 | Build Failed | Build [#20221016.1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=2022543) partiallySucceeded
## :warning: : internal / dotnet-arcade-validation-official partiallySucceeded
### Summary
**Finished** - Mon, 17 Oct 2022 01:48:35 GMT
**Duration** - 98 minutes
**Requested for** - Microsoft.VisualStudio.Services.TFS
**Reason** - schedule
### Details
#### Promote Arcade to '.NET Eng - Latest' channel
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2022543/logs/415) - The latest build on 'main' branch for the 'installer' repository was not successful.
### Changes
- [384485db](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/384485db815795cb70a0ef85705ca725222c8d66) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20221015.1 (#3452)
- [f463f248](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/f463f24895ca6b019af151b8bfb728f404703137) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20221014.2 (#3450)
- [7f6a03a2](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/7f6a03a226ab05cab0d7d006ab7f8ab9d003d271) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20221014.1 (#3449)
24,434 | 12,103,869,128 | IssuesEvent | 2020-04-20 19:10:51 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | aws_lambda_alias is recreated when function_name changes from function name to ARN and viceversa | enhancement service/lambda | <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
```
Terraform v0.12.20
+ provider.aws v2.58.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_lambda_alias
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```diff
resource "aws_lambda_alias" "alias" {
- function_name = "lambda"
+ function_name = "arn:aws:lambda:<region>:<account>:function:lambda"
function_version = "..."
name = "alias"
}
```
### Expected Behavior
<!--- What should have happened? --->
`terraform plan` should show no changes.
### Actual Behavior
<!--- What actually happened? --->
`terraform plan` shows that alias will be recreated even though the ARN points to the same function.
```hcl
Terraform will perform the following actions:
# aws_lambda_alias.alias must be replaced
-/+ resource "aws_lambda_alias" "alias" {
~ arn = "arn:aws:lambda:<region>:<account>:function:lambda:alias" -> (known after apply)
description = "..."
~ function_name = "lambda" -> "arn:aws:lambda:<region>:<account>:function:lambda" # forces replacement
function_version = "..."
~ id = "arn:aws:lambda:<region>:<account>:function:lambda:alias" -> (known after apply)
~ invoke_arn = "..." -> (known after apply)
name = "alias"
}
Plan: 1 to add, 0 to change, 1 to destroy.
```
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. Create a lambda alias with `function_name` argument set to the name of the function.
2. Change `function_name` to the ARN of the function.
3. Run `terraform plan`.
```diff
resource "aws_lambda_alias" "alias" {
- function_name = "lambda"
+ function_name = "arn:aws:lambda:<region>:<account>:function:lambda"
function_version = "..."
name = "alias"
}
```
### Expected Behavior
<!--- What should have happened? --->
`terraform plan` should show no changes.
### Actual Behavior
<!--- What actually happened? --->
`terraform plan` shows that the alias will be recreated even though the ARN points to the same function.
```hcl
Terraform will perform the following actions:
# aws_lambda_alias.alias must be replaced
-/+ resource "aws_lambda_alias" "alias" {
~ arn = "arn:aws:lambda:<region>:<account>:function:lambda:alias" -> (known after apply)
description = "..."
~ function_name = "lambda" -> "arn:aws:lambda:<region>:<account>:function:lambda" # forces replacement
function_version = "..."
~ id = "arn:aws:lambda:<region>:<account>:function:lambda:alias" -> (known after apply)
~ invoke_arn = "..." -> (known after apply)
name = "alias"
}
Plan: 1 to add, 0 to change, 1 to destroy.
```
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. Create a lambda alias with the `function_name` argument set to the name of the function.
2. Change `function_name` to the ARN of the function.
3. Run `terraform plan`. | non_priority | aws lambda alias is recreated when function name changes from function name to arn and viceversa please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version terraform provider aws affected resource s aws lambda alias terraform configuration files diff resource aws lambda alias alias function name lambda function name arn aws lambda function lambda function version name alias expected behavior terraform plan should show no changes actual behavior terraform plan shows that alias will be recreated even though the arn points to the same function hcl terraform will perform the following actions aws lambda alias alias must be replaced resource aws lambda alias alias arn arn aws lambda function lambda alias known after apply description function name lambda arn aws lambda function lambda forces replacement function version id arn aws lambda function lambda alias known after apply invoke arn known after apply name alias plan to add to change to destroy steps to reproduce create a lambda alias with function name argument set to the name of the function change function name to the arn of the function run terraform plan | 0 |
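The Terraform record above turns on the fact that `function_name` may hold either a bare Lambda function name or its full ARN, and both resolve to the same function. A minimal Python sketch of the normalization a diff-suppress check could apply (purely illustrative; the regex and helper below are invented, not provider code):

```python
import re

# Hypothetical helper, not actual provider code: reduce either a bare
# Lambda function name or a full function ARN to the bare name, so the
# two spellings of the same function compare equal.
ARN_RE = re.compile(r"^arn:aws:lambda:[^:]+:\d+:function:([^:]+)")

def normalize_function_name(identifier: str) -> str:
    """Return the bare function name for either input form."""
    match = ARN_RE.match(identifier)
    # The capture stops at ":", so version/alias qualifiers are ignored.
    return match.group(1) if match else identifier

# Both spellings from the plan output normalize to the same value, so a
# diff-suppress check comparing normalized values would report no change.
assert normalize_function_name("lambda") == \
    normalize_function_name("arn:aws:lambda:us-east-1:123456789012:function:lambda")
```

Terraform providers express roughly this idea with a diff-suppress function on the attribute, which is effectively what the issue asks for.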
60,642 | 6,712,162,659 | IssuesEvent | 2017-10-13 08:18:32 | daisy/ace | https://api.github.com/repos/daisy/ace | opened | Add integration tests for concurrent runs of Ace | tests | Ace should now be able to process several EPUBs concurrently. We should add a couple integration tests for this. | 1.0 | Add integration tests for concurrent runs of Ace - Ace should now be able to process several EPUBs concurrently. We should add a couple integration tests for this. | non_priority | add integration tests for concurrent runs of ace ace should now be able to process several epubs concurrently we should add a couple integration tests for this | 0 |
11,781 | 4,290,203,340 | IssuesEvent | 2016-07-18 08:43:55 | Outernet-Project/librarian | https://api.github.com/repos/Outernet-Project/librarian | closed | Javascript errors in console when opening hamburger menu (which also doesn't open) | bug UI code (JS/CSS) | Possibly during the merge of bundles, either the order or some of the files were left out, these errors appear:
Name elements.Element already defined with value undefined
Name elements.ExpandableBox already defined with value undefined
Name widgets.PulldownMenubar already defined with value undefined
Name widgets.ContextMenu already defined with value undefined
Name widgets.Statusbar already defined with value undefined | 1.0 | Javascript errors in console when opening hamburger menu (which also doesn't open) - Possibly during the merge of bundles, either the order or some of the files were left out, these errors appear:
Name elements.Element already defined with value undefined
Name elements.ExpandableBox already defined with value undefined
Name widgets.PulldownMenubar already defined with value undefined
Name widgets.ContextMenu already defined with value undefined
Name widgets.Statusbar already defined with value undefined | non_priority | javascript errors in console when opening hamburger menu which also doesn t open possibly during the merge of bundles either the order or some of the files were left out these errors appear name elements element already defined with value undefined name elements expandablebox already defined with value undefined name widgets pulldownmenubar already defined with value undefined name widgets contextmenu already defined with value undefined name widgets statusbar already defined with value undefined | 0 |
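The "already defined" console errors in the record above are the classic symptom of the same script being concatenated twice into a merged bundle: the second definition hits a namespace guard. A small Python analogy of such a guard (the original code is JavaScript; the class below is invented for illustration):

```python
# Illustrative analogy only: a namespace registry that warns when the
# same name is defined twice, mimicking the double-include failure mode
# described in the issue.

class Namespace:
    def __init__(self):
        self.defs = {}
        self.warnings = []

    def define(self, name, value):
        if name in self.defs:
            # Duplicate include: keep the first value, record a warning.
            self.warnings.append(
                f"Name {name} already defined with value {self.defs[name]}")
            return
        self.defs[name] = value

ns = Namespace()
ns.define("elements.Element", "undefined")   # first (broken) definition
ns.define("elements.Element", lambda: None)  # duplicate include -> warning
assert ns.warnings == [
    "Name elements.Element already defined with value undefined"]
```

That the recorded value is `undefined` suggests the first copy of each file failed to initialize before the second copy was loaded, which is consistent with a bad merge order.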
61,625 | 25,578,152,076 | IssuesEvent | 2022-12-01 00:41:44 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [REMOTO] Engenheiro de Software na [SOCIAL MINER] | BIG DATA MYSQL PYTHON MONGODB JAVASCRIPT C# TESTE AUTOMATIZADO NODE.JS DOCKER KUBERNETES NOSQL AWS ETL REMOTO ELASTIC APACHE KAFKA .NET CORE AZURE MICROSERVICES HADOOP ECOSYSTEM REDSHIFT SERVERLESS SCIKIT-LEARN TENSORFLOW KINESIS Stale | ## Engenheiro de Software
We care about what you deliver, not how you dress. You know how a startup is, right? Creative people; a dynamic, laid-back environment; flexible hours; little bureaucracy and few formalities... Take a look at the photos to get a feel for the nerds' vibe :P
## Activities
We are looking for passionate Developers to help us put an end to SPAM in the world. No joke: we have a mission to transform people's shopping experience into something much more humanized and intelligent.
With great power comes great responsibility, and to be a super Miner you will deal with the following:
- Scalability / performance (100+ million people impacted per month)
- Intelligent systems (want to play with data mining and AI?)
- Agile development methodologies
- Back-end focus (C#, Python, NodeJS)
- Front-end programming (vanilla JS, React)
## Location
- Salvador
## Requirements
*Required:*
- C#
- JavaScript
- MySQL
- NoSQL (preference for MongoDB)
- Automated tests
- Scalable architectures
- RESTful architectures
- Microservices
- Leadership profile
- Teamwork, good interpersonal skills, sound argumentation, and proactivity
*Nice to have:*
- .Net Core
- Python
- NodeJS
- Hadoop Ecosystem (Apache Spark, EMR, HDFS)
- Elasticsearch
- Big Data architectures
- Redshift
- ETLs
- Serverless
- Kubernetes (Docker)
- Machine Learning frameworks (scikit-learn, tensorflow, etc.)
- Apache Kafka / Kinesis
- Object Storage (AWS S3, Azure Blob, etc.)
## Hiring
- Remote
## Social Miner
Social Miner is a People Marketing platform that helps companies automate their marketing communication, delivering personalized experiences to each individual at large scale. The startup has more than 30 million users connected through its social plugins and more than 500 million unique impressions per month.
## How to apply
Link: https://hipsters.jobs/job/11362/engenheiro-de-software-ba/ | 1.0 | [REMOTO] Engenheiro de Software na [SOCIAL MINER] - ## Engenheiro de Software
We care about what you deliver, not how you dress. You know how a startup is, right? Creative people; a dynamic, laid-back environment; flexible hours; little bureaucracy and few formalities... Take a look at the photos to get a feel for the nerds' vibe :P
## Activities
We are looking for passionate Developers to help us put an end to SPAM in the world. No joke: we have a mission to transform people's shopping experience into something much more humanized and intelligent.
With great power comes great responsibility, and to be a super Miner you will deal with the following:
- Scalability / performance (100+ million people impacted per month)
- Intelligent systems (want to play with data mining and AI?)
- Agile development methodologies
- Back-end focus (C#, Python, NodeJS)
- Front-end programming (vanilla JS, React)
## Location
- Salvador
## Requirements
*Required:*
- C#
- JavaScript
- MySQL
- NoSQL (preference for MongoDB)
- Automated tests
- Scalable architectures
- RESTful architectures
- Microservices
- Leadership profile
- Teamwork, good interpersonal skills, sound argumentation, and proactivity
*Nice to have:*
- .Net Core
- Python
- NodeJS
- Hadoop Ecosystem (Apache Spark, EMR, HDFS)
- Elasticsearch
- Big Data architectures
- Redshift
- ETLs
- Serverless
- Kubernetes (Docker)
- Machine Learning frameworks (scikit-learn, tensorflow, etc.)
- Apache Kafka / Kinesis
- Object Storage (AWS S3, Azure Blob, etc.)
## Hiring
- Remote
## Social Miner
Social Miner is a People Marketing platform that helps companies automate their marketing communication, delivering personalized experiences to each individual at large scale. The startup has more than 30 million users connected through its social plugins and more than 500 million unique impressions per month.
## How to apply
Link: https://hipsters.jobs/job/11362/engenheiro-de-software-ba/ | non_priority | engenheiro de software na engenheiro de software nos preocupamos com o que você entrega e não como você se veste sabe como é uma startup né pessoas criativas ambiente dinâmico despojado horário flexível pouca burocracia e formalidades dá uma olhada nas fotos para sentir a vibe dos nerds p atividades estamos procurando desenvolvedores as apaixonado a s para nos ajudar a acabar com o spam no mundo não é brincadeira temos uma missão de transformar a experiência de compra das pessoas em algo muito mais humanizado e inteligente com grandes poderes vêm grandes responsabilidades e para ser um super miner vai lidar com isso aqui escalabilidade performance milhões de pessoas impactadas mês sistemas inteligentes quer brincar com data mining e ia metodologias de desenvolvimento ágil foco em back end c python nodejs programação front end js vanilla react local salvador requisitos obrigatórios c javascript mysql nosql prefencia por mongodb testes automatizados arquiteturas escaláveis arquiteturas restful microservices perfil de liderança trabalho em equipe bom relacionamento interpessoal boa argumentação e proatividade diferenciais net core python nodejs hadoop ecosystem apache spark emr hdfs elasticsearch arquiteturas de big data redshift etl´s serverless kubernetes docker machine learning frameworks scikit learn tensorflow etc apache kafka kinesis object storage aws azure blob etc contratação remoto social miner a social miner é uma plataforma de people marketing que ajuda empresas a automatizarem sua comunicação de marketing entregando experiências personalizadas para cada indivíduo em grande escala a startup possui mais de milhões de usuários conectados através de seus plugins sociais e mais de milhões de impressões únicas mês como se candidatar link | 0 |
131,274 | 10,687,078,810 | IssuesEvent | 2019-10-22 15:29:07 | imixs/imixs-melman | https://api.github.com/repos/imixs/imixs-melman | closed | JWTAuthenticator - set jwt as header property instead of a query string | feature testing | set the jwt as a header property
See also https://github.com/imixs/imixs-jwt/issues/9 | 1.0 | JWTAuthenticator - set jwt as header property instead of a query string - set the jwt as a header property
See also https://github.com/imixs/imixs-jwt/issues/9 | non_priority | jwtauthenticator set jwt as header property instead of a query string set the jwt as a header property see also | 0 |
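The change requested in the JWTAuthenticator record, moving the JWT from a query-string parameter into a request header, can be sketched as follows (Python; illustrative only, the actual project is Java, and both helper names below are invented):

```python
# Illustrative sketch only: two ways of attaching a JWT to a request.

def with_jwt_query(url: str, jwt: str) -> str:
    """The discouraged form: the token leaks into URLs, logs, and caches."""
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}jwt={jwt}"

def with_jwt_header(jwt: str) -> dict:
    """The preferred form: the token travels as a bearer Authorization header."""
    return {"Authorization": f"Bearer {jwt}"}

headers = with_jwt_header("eyJhbGciOiJIUzI1NiJ9.e30.sig")
assert headers["Authorization"].startswith("Bearer ")
```

Keeping the token in the `Authorization` header is the standard bearer-token convention, which is presumably why the issue asks for a header property instead of a query string.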
9,735 | 13,854,794,530 | IssuesEvent | 2020-10-15 10:00:24 | alessandrasonsini/PeakLand | https://api.github.com/repos/alessandrasonsini/PeakLand | opened | FR - Visited list | Functional Requirement | The system shall provide a list of all visited itineraries for each logged user. | 1.0 | FR - Visited list - The system shall provide a list of all visited itineraries for each logged user. | non_priority | fr visited list the system shall provide a list of all visited itineraries for each logged user | 0 |
96,968 | 8,638,745,576 | IssuesEvent | 2018-11-23 15:47:34 | EyeSeeTea/dhis2-core | https://api.github.com/repos/EyeSeeTea/dhis2-core | closed | App for notifications settings | testing | For DHIS 2.30
- [x] Create skeleton app: dhis2-app-skeleton. Check existing apps for best reference.
- [x] d2 (30.x.x)
- [x] d2-ui
- [x] d2-i18n
- [x] material-ui v3
- [x] style checks
- [x] build infrastructure (manifest + webapp).
- [x] testing: jest + enzyme
- [x] App: notifications-app, Notifications App.
- [x] Create attributes by hand on DB: user_noInterpretationMentionNotifications, user_noInterpretationSubcriptionNotifications
- [x] Add validations.
- [x] Snackbar (also skeleton)
- [x] Functional tests -> dhis2 server (also skeleton), checks casper and alternatives.
- [x] Update dhis2-newsletter script to use new params
- [x] Use 2 custom attributes (boolean) in dhis2-newsletter script.
 | 1.0 | App for notifications settings - For DHIS 2.30
- [x] Create skeleton app: dhis2-app-skeleton. Check existing apps for best reference.
- [x] d2 (30.x.x)
- [x] d2-ui
- [x] d2-i18n
- [x] material-ui v3
- [x] style checks
- [x] build infrastructure (manifest + webapp).
- [x] testing: jest + enzyme
- [x] App: notifications-app, Notifications App.
- [x] Create attributes by hand on DB: user_noInterpretationMentionNotifications, user_noInterpretationSubcriptionNotifications
- [x] Add validations.
- [x] Snackbar (also skeleton)
- [x] Functional tests -> dhis2 server (also skeleton), checks casper and alternatives.
- [x] Update dhis2-newsletter script to use new params
- [x] Use 2 custom attributes (boolean) in dhis2-newsletter script.
 | non_priority | app for notifications settings for dhis create skeleton app app skeleton check existing apps for best reference x x ui material ui style checks build infrastructure manifest webapp testing jest enzyme app notifications app notifications app create attributes by hand on db user nointerpretationmentionnotifications user nointerpretationsubcriptionnotifications add validations snackbar also skeleton functional tests server also skeleton checks casper and alternatives update newsletter script to use new params use custom attributes boolean in newsletter script | 0 |
299,674 | 25,917,275,129 | IssuesEvent | 2022-12-15 18:27:20 | apache/beam | https://api.github.com/repos/apache/beam | closed | testTwoTimersSettingEachOtherWithCreateAsInputBounded flaky | java runners dataflow P1 bug failing test flake beam-fixit | beam_PostCommit_Java_VR_Dataflow_V2_Streaming flakes on org.apache.beam.sdk.transforms.ParDoTest$TimerTests.testTwoTimersSettingEachOtherWithCreateAsInputBounded
java.lang.RuntimeException: generic::unknown: org.apache.beam.sdk.util.UserCodeException: java.lang.AssertionError: ParDoTest.TimerTests.TwoTimerTest/ParDo(Anonymous)/ParMultiDo(Anonymous).output: Expected: iterable with items ["t1:0:0", "t2:0:0", "t1:1:1", "t2:1:1", "t1:2:2", "t2:2:2", "t1:3:3", "t2:3:3", "t1:4:4", "t2:4:4", "t1:5:5", "t2:5:5", "t1:6:6", "t2:6:6", "t1:7:7", "t2:7:7", "t1:8:8", "t2:8:8", "t1:9:9", "t2:9:9", "t1:10:10", "t2:10:10", "t1:11:11", "t2:11:11", "t1:12:12", "t2:12:12", "t1:13:13", "t2:13:13", "t1:14:14", "t2:14:14", "t1:15:15", "t2:15:15", "t1:16:16", "t2:16:16", "t1:17:17", "t2:17:17", "t1:18:18", "t2:18:18", "t1:19:19", "t2:19:19", "t1:20:20", "t2:20:20", "t1:21:21", "t2:21:21", "t1:22:22", "t2:22:22", "t1:23:23", "t2:23:23", "t1:24:24", "t2:24:24", "t1:25:25", "t2:25:25", "t1:26:26", "t2:26:26", "t1:27:27", "t2:27:27", "t1:28:28", "t2:28:28", "t1:29:29", "t2:29:29", "t1:30:30", "t2:30:30", "t1:31:31", "t2:31:31", "t1:32:32", "t2:32:32", "t1:33:33", "t2:33:33", "t1:34:34", "t2:34:34", "t1:35:35", "t2:35:35", "t1:36:36", "t2:36:36", "t1:37:37", "t2:37:37", "t1:38:38", "t2:38:38", "t1:39:39", "t2:39:39", "t1:40:40", "t2:40:40", "t1:41:41", "t2:41:41", "t1:42:42", "t2:42:42", "t1:43:43", "t2:43:43", "t1:44:44", "t2:44:44", "t1:45:45", "t2:45:45", "t1:46:46", "t2:46:46", "t1:47:47", "t2:47:47", "t1:48:48", "t2:48:48", "t1:49:49", "t2:49:49", "t1:50:50", "t2:50:50", "t1:51:51", "t2:51:51", "t1:52:52", "t2:52:52", "t1:53:53", "t2:53:53", "t1:54:54", "t2:54:54", "t1:55:55", "t2:55:55", "t1:56:56", "t2:56:56", "t1:57:57", "t2:57:57", "t1:58:58", "t2:58:58", "t1:59:59", "t2:59:59", "t1:60:60", "t2:60:60", "t1:61:61", "t2:61:61", "t1:62:62", "t2:62:62", "t1:63:63", "t2:63:63", "t1:64:64", "t2:64:64", "t1:65:65", "t2:65:65", "t1:66:66", "t2:66:66", "t1:67:67", "t2:67:67", "t1:68:68", "t2:68:68", "t1:69:69", "t2:69:69", "t1:70:70", "t2:70:70", "t1:71:71", "t2:71:71", "t1:72:72", "t2:72:72", "t1:73:73", "t2:73:73", "t1:74:74", "t2:74:74", "t1:75:75", 
"t2:75:75", "t1:76:76", "t2:76:76", "t1:77:77", "t2:77:77", "t1:78:78", "t2:78:78", "t1:79:79", "t2:79:79", "t1:80:80", "t2:80:80", "t1:81:81", "t2:81:81", "t1:82:82", "t2:82:82", "t1:83:83", "t2:83:83", "t1:84:84", "t2:84:84", "t1:85:85", "t2:85:85", "t1:86:86", "t2:86:86", "t1:87:87", "t2:87:87", "t1:88:88", "t2:88:88", "t1:89:89", "t2:89:89", "t1:90:90", "t2:90:90", "t1:91:91", "t2:91:91", "t1:92:92", "t2:92:92", "t1:93:93", "t2:93:93", "t1:94:94", "t2:94:94", "t1:95:95", "t2:95:95", "t1:96:96", "t2:96:96", "t1:97:97", "t2:97:97", "t1:98:98", "t2:98:98", "t1:99:99", "t2:99:99", "t1:100:100", "t2:100:100"] in any order but: not matched: "t2:0:101"
https://ci-beam.apache.org/job/beam_PostCommit_Java_VR_Dataflow_V2_Streaming/1133/testReport/junit/org.apache.beam.sdk.transforms/ParDoTest$TimerTests/testTwoTimersSettingEachOtherWithCreateAsInputBounded/history/
Imported from Jira [BEAM-12809](https://issues.apache.org/jira/browse/BEAM-12809). Original Jira may contain additional context.
Reported by: ibzib. | 1.0 | testTwoTimersSettingEachOtherWithCreateAsInputBounded flaky - beam_PostCommit_Java_VR_Dataflow_V2_Streaming flakes on org.apache.beam.sdk.transforms.ParDoTest$TimerTests.testTwoTimersSettingEachOtherWithCreateAsInputBounded
java.lang.RuntimeException: generic::unknown: org.apache.beam.sdk.util.UserCodeException: java.lang.AssertionError: ParDoTest.TimerTests.TwoTimerTest/ParDo(Anonymous)/ParMultiDo(Anonymous).output: Expected: iterable with items ["t1:0:0", "t2:0:0", "t1:1:1", "t2:1:1", "t1:2:2", "t2:2:2", "t1:3:3", "t2:3:3", "t1:4:4", "t2:4:4", "t1:5:5", "t2:5:5", "t1:6:6", "t2:6:6", "t1:7:7", "t2:7:7", "t1:8:8", "t2:8:8", "t1:9:9", "t2:9:9", "t1:10:10", "t2:10:10", "t1:11:11", "t2:11:11", "t1:12:12", "t2:12:12", "t1:13:13", "t2:13:13", "t1:14:14", "t2:14:14", "t1:15:15", "t2:15:15", "t1:16:16", "t2:16:16", "t1:17:17", "t2:17:17", "t1:18:18", "t2:18:18", "t1:19:19", "t2:19:19", "t1:20:20", "t2:20:20", "t1:21:21", "t2:21:21", "t1:22:22", "t2:22:22", "t1:23:23", "t2:23:23", "t1:24:24", "t2:24:24", "t1:25:25", "t2:25:25", "t1:26:26", "t2:26:26", "t1:27:27", "t2:27:27", "t1:28:28", "t2:28:28", "t1:29:29", "t2:29:29", "t1:30:30", "t2:30:30", "t1:31:31", "t2:31:31", "t1:32:32", "t2:32:32", "t1:33:33", "t2:33:33", "t1:34:34", "t2:34:34", "t1:35:35", "t2:35:35", "t1:36:36", "t2:36:36", "t1:37:37", "t2:37:37", "t1:38:38", "t2:38:38", "t1:39:39", "t2:39:39", "t1:40:40", "t2:40:40", "t1:41:41", "t2:41:41", "t1:42:42", "t2:42:42", "t1:43:43", "t2:43:43", "t1:44:44", "t2:44:44", "t1:45:45", "t2:45:45", "t1:46:46", "t2:46:46", "t1:47:47", "t2:47:47", "t1:48:48", "t2:48:48", "t1:49:49", "t2:49:49", "t1:50:50", "t2:50:50", "t1:51:51", "t2:51:51", "t1:52:52", "t2:52:52", "t1:53:53", "t2:53:53", "t1:54:54", "t2:54:54", "t1:55:55", "t2:55:55", "t1:56:56", "t2:56:56", "t1:57:57", "t2:57:57", "t1:58:58", "t2:58:58", "t1:59:59", "t2:59:59", "t1:60:60", "t2:60:60", "t1:61:61", "t2:61:61", "t1:62:62", "t2:62:62", "t1:63:63", "t2:63:63", "t1:64:64", "t2:64:64", "t1:65:65", "t2:65:65", "t1:66:66", "t2:66:66", "t1:67:67", "t2:67:67", "t1:68:68", "t2:68:68", "t1:69:69", "t2:69:69", "t1:70:70", "t2:70:70", "t1:71:71", "t2:71:71", "t1:72:72", "t2:72:72", "t1:73:73", "t2:73:73", "t1:74:74", "t2:74:74", "t1:75:75", 
"t2:75:75", "t1:76:76", "t2:76:76", "t1:77:77", "t2:77:77", "t1:78:78", "t2:78:78", "t1:79:79", "t2:79:79", "t1:80:80", "t2:80:80", "t1:81:81", "t2:81:81", "t1:82:82", "t2:82:82", "t1:83:83", "t2:83:83", "t1:84:84", "t2:84:84", "t1:85:85", "t2:85:85", "t1:86:86", "t2:86:86", "t1:87:87", "t2:87:87", "t1:88:88", "t2:88:88", "t1:89:89", "t2:89:89", "t1:90:90", "t2:90:90", "t1:91:91", "t2:91:91", "t1:92:92", "t2:92:92", "t1:93:93", "t2:93:93", "t1:94:94", "t2:94:94", "t1:95:95", "t2:95:95", "t1:96:96", "t2:96:96", "t1:97:97", "t2:97:97", "t1:98:98", "t2:98:98", "t1:99:99", "t2:99:99", "t1:100:100", "t2:100:100"] in any order but: not matched: "t2:0:101"
https://ci-beam.apache.org/job/beam_PostCommit_Java_VR_Dataflow_V2_Streaming/1133/testReport/junit/org.apache.beam.sdk.transforms/ParDoTest$TimerTests/testTwoTimersSettingEachOtherWithCreateAsInputBounded/history/
Imported from Jira [BEAM-12809](https://issues.apache.org/jira/browse/BEAM-12809). Original Jira may contain additional context.
Reported by: ibzib. | non_priority | testtwotimerssettingeachotherwithcreateasinputbounded flaky beam postcommit java vr dataflow streaming flakes on org apache beam sdk transforms pardotest timertests testtwotimerssettingeachotherwithcreateasinputbounded java lang runtimeexception generic unknown org apache beam sdk util usercodeexception java lang assertionerror pardotest timertests twotimertest pardo anonymous parmultido anonymous output expected iterable with items in any order but not matched imported from jira original jira may contain additional context reported by ibzib | 0 |
18,189 | 10,024,275,356 | IssuesEvent | 2019-07-16 21:24:23 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | Introduce new tick event cls (Cumulative Layout Shift) | Type: Feature Request WG: performance | ## Describe the new feature or change to an existing feature you'd like to see
We introduced support for viewers to read a new tick event `lj` (layout jank) in #21060. Layout jank was a new, experimental metric behind a Chrome Origin Trial. Since then, the metric has matured a bit. It is now a [draft](https://wicg.github.io/layout-instability) under the Web Platform Incubator Community group, has gone through renaming, along with some internal improvements.
I suggest we introduce `cls` as a new tick event, and make the metric data available to viewers along similar lifecycle events as we do for `lj`. | True | Introduce new tick event cls (Cumulative Layout Shift) - ## Describe the new feature or change to an existing feature you'd like to see
We introduced support for viewers to read a new tick event `lj` (layout jank) in #21060. Layout jank was a new, experimental metric behind a Chrome Origin Trial. Since then, the metric has matured a bit. It is now a [draft](https://wicg.github.io/layout-instability) under the Web Platform Incubator Community group, has gone through renaming, along with some internal improvements.
I suggest we introduce `cls` as a new tick event, and make the metric data available to viewers along similar lifecycle events as we do for `lj`. | non_priority | introduce new tick event cls cumulative layout shift describe the new feature or change to an existing feature you d like to see we introduced support for viewers to read a new tick event lj layout jank in layout jank was a new experimental metric behind a chrome origin trial since then the metric has matured a bit it is now a under the web platform incubator community group has gone through renaming along with some internal improvements i suggest we introduce cls as a new tick event and make the metric data available to viewers along similar lifecycle events as we do for lj | 0 |
311,213 | 26,777,122,209 | IssuesEvent | 2023-01-31 18:00:06 | Kong/kubernetes-ingress-controller | https://api.github.com/repos/Kong/kubernetes-ingress-controller | closed | E2E tests shouldn't use cleaner.Cleanup | area/tests | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Problem Statement
As E2E tests create their own clusters, it doesn't make sense to use `cleaner.Cleanup` (cleaning up in-cluster resources created by the test) on their teardown. Instead, we should simply `cluster.Cleanup` the whole cluster. It should save some time while running the tests.
### Proposed Solution
Remove `cleaner.Cleanup` calls in favour of `cluster.Cleanup` in all E2E tests (`test/e2e/`) that create their own cluster.
### Additional information
Ticket is a result of discussion: https://github.com/Kong/kubernetes-ingress-controller/pull/3013#discussion_r986589068
### Acceptance Criteria
- [ ] Every test case in `test/e2e/` that always creates its own cluster, calls `cluster.Cleanup` instead of `cleaner.Cleanup` on its teardown.
- [ ] Diagnostics generated by the `cleaner` are still being dumped for every test case
| 1.0 | E2E tests shouldn't use cleaner.Cleanup - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Problem Statement
As E2E tests create their own clusters, it doesn't make sense to use `cleaner.Cleanup` (cleaning up in-cluster resources created by the test) on their teardown. Instead, we should simply `cluster.Cleanup` the whole cluster. It should save some time while running the tests.
### Proposed Solution
Remove `cleaner.Cleanup` calls in favour of `cluster.Cleanup` in all E2E tests (`test/e2e/`) that create their own cluster.
### Additional information
Ticket is a result of discussion: https://github.com/Kong/kubernetes-ingress-controller/pull/3013#discussion_r986589068
### Acceptance Criteria
- [ ] Every test case in `test/e2e/` that always creates its own cluster, calls `cluster.Cleanup` instead of `cleaner.Cleanup` on its teardown.
- [ ] Diagnostics generated by the `cleaner` are still being dumped for every test case
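The trade-off described in this record, per-resource `cleaner.Cleanup` versus throwing away the whole cluster, can be sketched with a small Python analogy (the real code is Go; every name below is invented for illustration):

```python
# Illustrative analogy only: why cluster-level teardown subsumes
# per-resource cleanup for tests that own their cluster.

class Cluster:
    """A throwaway test cluster that tracks created resources."""
    def __init__(self):
        self.resources = []
        self.deleted = False

    def create(self, name):
        self.resources.append(name)

    def cleanup(self):
        # Tearing down the whole cluster implicitly removes every
        # in-cluster resource; no per-resource deletion is needed.
        self.resources.clear()
        self.deleted = True

class Cleaner:
    """Per-resource cleanup: deletes tracked objects one by one."""
    def __init__(self, cluster):
        self.cluster = cluster
        self.tracked = []

    def track(self, name):
        self.tracked.append(name)

    def cleanup(self):
        # Slower path: one delete call per tracked resource.
        for name in self.tracked:
            self.cluster.resources.remove(name)

cluster = Cluster()
cleaner = Cleaner(cluster)
for name in ("deployment", "service", "ingress"):
    cluster.create(name)
    cleaner.track(name)

# For a test that owns its cluster, one cluster-level teardown suffices:
cluster.cleanup()
assert cluster.deleted and cluster.resources == []
```

The acceptance criteria keep the cleaner around only for its diagnostics dump, which matches the split above: tracking stays useful even when per-resource deletion is skipped.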
| non_priority | tests shouldn t use cleaner cleanup is there an existing issue for this i have searched the existing issues problem statement as tests create their own clusters it doesn t make sense to use cleaner cleanup cleaning up in cluster resources created by the test on their tear down instead we should simply cluster cleanup the whole cluster it should save some time while running the tests proposed solution remove cleaner cleanup calls in favour of cluster cleanup in all tests test that create their own cluster additional information ticket is a result of discussion acceptance criteria every test case in test that always creates its own cluster calls cluster cleanup instead of cleaner cleanup on its teardown diagnostics generated by the cleaner are still being dumped for every test case | 0 |
64,411 | 14,665,072,382 | IssuesEvent | 2020-12-29 13:28:28 | turkdevops/karma-jasmine | https://api.github.com/repos/turkdevops/karma-jasmine | opened | CVE-2020-28282 (High) detected in getobject-0.1.0.tgz | security vulnerability | ## CVE-2020-28282 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>getobject-0.1.0.tgz</b></p></summary>
<p>get.and.set.deep.objects.easily = true</p>
<p>Library home page: <a href="https://registry.npmjs.org/getobject/-/getobject-0.1.0.tgz">https://registry.npmjs.org/getobject/-/getobject-0.1.0.tgz</a></p>
<p>Path to dependency file: karma-jasmine/package.json</p>
<p>Path to vulnerable library: karma-jasmine/node_modules/getobject/package.json</p>
<p>
Dependency Hierarchy:
- grunt-1.2.1.tgz (Root Library)
- grunt-legacy-util-1.1.1.tgz
- :x: **getobject-0.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/karma-jasmine/commit/5b4ded195decdcc676462fc56dd120baadf63204">5b4ded195decdcc676462fc56dd120baadf63204</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in ‘getobject’ version 0.1.0 allows an attacker to cause a denial of service and may lead to remote code execution.
<p>Publish Date: 2020-11-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28282>CVE-2020-28282</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
2,469 | 2,733,194,748 | IssuesEvent | 2015-04-17 12:27:06 | Doola/elmenusFeed | https://api.github.com/repos/Doola/elmenusFeed | closed | SS Import reviews Of Restaurants from mysql to neo4j | Code reviewed Documentation reviewed | SS Import reviews Of Restaurants from mysql to neo4j #72 | 1.0 | non_priority | 0 |
228,909 | 17,484,459,745 | IssuesEvent | 2021-08-09 09:09:05 | oscar-system/Oscar.jl | https://api.github.com/repos/oscar-system/Oscar.jl | opened | docs: disable doctests by default, allow enabling | documentation | I am not at a computer right now, else I'd do this right away, but I don't want to forget about this (again): we wanted to disable doctests when building the docs by default, and add a keyword argument to `Oscar.build_doc` to allow enabling them again. Perhaps also add other KW args to it, e.g. for controlling the `strict` argument of the `doit` function. (And perhaps some of the args of `doit` should be turned into KW args, too?)
| 1.0 | non_priority | 0 |
73,105 | 9,645,734,079 | IssuesEvent | 2019-05-17 09:24:16 | cselab/YMeRo | https://api.github.com/repos/cselab/YMeRo | closed | add Tutorials | documentation | Add a section with tutorials in the docs.
Should include several sections, with commented scripts.
The scripts must be part of the tests for maintainability reasons.
- [x] a simple "hello world" setup
- [x] a basic setup with plugins example
- [x] walls creation
- [x] object belonging: membranes with inner/outer
- [ ] object belonging: create rigid objects | 1.0 | non_priority | 0 |
154,012 | 19,710,213,549 | IssuesEvent | 2022-01-13 03:56:33 | CanarysPlayground/Sample45 | https://api.github.com/repos/CanarysPlayground/Sample45 | closed | CVE-2020-25649 (High) detected in jackson-databind-2.2.3.jar | security vulnerability | ## CVE-2020-25649 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.2.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to vulnerable library: /lib/jackson-databind-2.2.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.2.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/CanarysPlayground/Sample45/commit/86aa8337dc54198617b227be04555bc8bbe0c1da">86aa8337dc54198617b227be04555bc8bbe0c1da</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649>CVE-2020-25649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2589">https://github.com/FasterXML/jackson-databind/issues/2589</a></p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
257,138 | 19,488,672,209 | IssuesEvent | 2021-12-26 22:39:53 | PandaHugMonster/php-simputils | https://api.github.com/repos/PandaHugMonster/php-simputils | opened | Prepare lots of documentation | documentation | After finalizing the initial architecture, prepare an extensive amount of detailed documentation to cover lots of cases. | 1.0 | non_priority | 0 |
215,389 | 16,601,485,461 | IssuesEvent | 2021-06-01 20:07:53 | krautzource/aria-tree-walker | https://api.github.com/repos/krautzource/aria-tree-walker | closed | add music score example | documentation | Another low hanging fruit.
Some useful links:
* examples: https://w3c.github.io/mnx/docs/comparisons/musicxml/
* svg: https://opensheetmusicdisplay.github.io/demo/
* speech: http://iceb.org/DictatingMusic.pdf
* braille output:
* https://code.google.com/archive/p/freedots/ // http://musicxml2braille.appspot.com/
* https://braillebug.org/music_braille.asp
* http://www.brailleauthority.org/music/music.html | 1.0 | non_priority | 0 |
193,269 | 15,372,318,390 | IssuesEvent | 2021-03-02 11:05:40 | GAA-UAM/scikit-fda | https://api.github.com/repos/GAA-UAM/scikit-fda | opened | Add sphinx-bibtex extension | documentation enhancement | **Is your feature request related to a problem? Please describe.**
The references used in this package are not uniform; they follow different citation styles.
**Describe the solution you'd like**
We should use the [sphinx-bibtex extension](https://sphinxcontrib-bibtex.readthedocs.io/en/latest/). This would allow us to put our references in a Bibtex file and set a common citation style (maybe Chicago). The extension now supports a `footbibliography` statement to insert a local bibliography per file, e.g. in docstrings and examples.
For the name of the bibtex entry, we will use the following formats:
`{author}_{year}_{first_relevant_word} `
<details>
<summary>Example</summary>
```
@article{happ_2020_object,
title = {Object-{{Oriented Software}} for {{Functional Data}}},
author = {{Happ-Kurz}, Clara},
year = {2020},
month = apr,
volume = {93},
pages = {1--38},
issn = {1548-7660},
doi = {10.18637/jss.v093.i05},
url = {https://www.jstatsoft.org/index.php/jss/article/view/v093i05},
urldate = {2020-06-15},
copyright = {Copyright (c) 2020 Clara Happ-Kurz},
journal = {Journal of Statistical Software},
keywords = {functional data analysis,functional principal component analysis,multivariate functional data,object orientation,simulation},
language = {en},
number = {1}
}
```
</details>
`{author1}+{author2}_{year}_{first_relevant_word}`
<details>
<summary>Example</summary>
```
@manual{scheipl+goldsmith_2020_tidyfun,
title = {{{tidyfun}}: {{Tools}} for Tidy Functional Data},
author = {Scheipl, Fabian and Goldsmith, Jeff and Wrobel, Julia},
year = {2020},
url = {https://github.com/tidyfun/tidyfun},
note = {R package version 0.0.83},
type = {Manual}
}
```
</details>
`{author1}++_{year}_{first_relevant_word}`
<details>
<summary>Example</summary>
```
@article{berrendero++_2018_use,
title = {On the {{Use}} of {{Reproducing Kernel Hilbert Spaces}} in {{Functional Classification}}},
author = {Berrendero, Jos{\'e} Ram{\'o}n and Cuevas, Antonio and Torrecilla, Jos{\'e} Luis},
year = {2018},
month = jul,
volume = {113},
pages = {1210--1218},
issn = {0162-1459},
doi = {10.1080/01621459.2017.1320287},
url = {https://doi.org/10.1080/01621459.2017.1320287},
urldate = {2019-09-02},
abstract = {The H\'ajek\textendash Feldman dichotomy establishes that two Gaussian measures are either mutually absolutely continuous with respect to each other (and hence there is a Radon\textendash Nikodym density for each measure with respect to the other one) or mutually singular. Unlike the case of finite-dimensional Gaussian measures, there are nontrivial examples of both situations when dealing with Gaussian stochastic processes. This article provides: (a) Explicit expressions for the optimal (Bayes) rule and the minimal classification error probability in several relevant problems of supervised binary classification of mutually absolutely continuous Gaussian processes. The approach relies on some classical results in the theory of reproducing kernel Hilbert spaces (RKHS). (b) An interpretation, in terms of mutual singularity, for the so-called ``near perfect classification'' phenomenon. We show that the asymptotically optimal rule proposed by these authors can be identified with the sequence of optimal rules for an approximating sequence of classification problems in the absolutely continuous case. (c) As an application, we discuss a natural variable selection method, which essentially consists of taking the original functional data X(t), t {$\in$} [0, 1] to a d-dimensional marginal (X(t1), \ldots, X(td)), which is chosen to minimize the classification error of the corresponding Fisher's linear rule. We give precise conditions under which this discrimination method achieves the minimal classification error of the original functional problem. Supplementary materials for this article are available online.},
journal = {Journal of the American Statistical Association},
keywords = {absolutely continuity,mutually singular processes,Radon–Nikodym derivatives,supervised functional classification,variable selection},
number = {523}
}
```
</details>
**Describe alternatives you've considered**
Inserting the references manually in the documentation is difficult to maintain, and the citation style would be difficult to change if required. | 1.0 | non_priority | 0 |
76,854 | 7,547,198,284 | IssuesEvent | 2018-04-18 07:08:52 | vaadin/beverage-starter-flow | https://api.github.com/repos/vaadin/beverage-starter-flow | closed | Test on Apache Tomcat 8.0.x, 8.5, 9 | testing | Test manually with the following Apache Tomcat versions and use cases:
- [x] 8.0.x (newest)
  - [x] development
- [x] production
- [x] push in production
- [x] 8.5
  - [x] development
- [x] production
- [x] push in production
- [x] 9
  - [x] development
- [x] production
- [x] push in production
If any changes were needed for the project, push the necessary things into a branch (or your fork) named with the server name & version. | 1.0 | non_priority | 0 |
108,168 | 11,582,032,583 | IssuesEvent | 2020-02-22 00:53:16 | keep-network/tbtc | https://api.github.com/repos/keep-network/tbtc | closed | Revisit interlinking between sections | :book: documentation tbtc | We're doing some parallel work right now on multiple sections; we'll have to do a pass where we revisit how the sections are interlinked and see if there are more natural ways to handle them. | 1.0 | non_priority | 0 |
6,838 | 6,625,700,743 | IssuesEvent | 2017-09-22 16:24:38 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | [Windows Arm64] CI Trigger words should match job name to eliminate confusion | area-Infrastructure bug | The current trigger `@dotnet-bot test Windows_NT arm64 Checked` is confusing
@jashook | 1.0 | non_priority | 0 |
10,435 | 12,396,198,870 | IssuesEvent | 2020-05-20 20:04:52 | facebook/hhvm | https://api.github.com/repos/facebook/hhvm | closed | DateTime: issue with timestamps and timezones | php5 incompatibility | I think I may have encountered a bug in DateTime behavior in HHVM - here's the code:
``` PHP
echo "\nDocumenting a bug in HHVM:\n\n";
$date = date_create(null, new DateTimeZone('Europe/Copenhagen'));
$date->setTimestamp(173919600);
echo "expected: 1975-07-07 00:00:00\n result: " . $date->format('Y-m-d H:i:s') . "\n\n";
// attempting work-around: (1)
$date = date_create(null);
$date->setTimestamp(173919600);
$date->setTimezone(new DateTimeZone('Europe/Copenhagen'));
echo "expected: 1975-07-07 00:00:00\n result: " . $date->format('Y-m-d H:i:s') . "\n\n";
// attempting work-around: (2)
$date = date_create_from_format('U', 173919600);
$date->setTimezone(new DateTimeZone('Europe/Copenhagen'));
echo "expected: 1975-07-07 00:00:00\n result: " . $date->format('Y-m-d H:i:s') . "\n\n";
```
Here's the output under PHP 5.3, 5.4, 5.5, 5.6:
```
Documenting a bug in HHVM:
expected: 1975-07-07 00:00:00
result: 1975-07-07 00:00:00
expected: 1975-07-07 00:00:00
result: 1975-07-07 00:00:00
expected: 1975-07-07 00:00:00
result: 1975-07-07 00:00:00
```
And here's the output under HHVM:
```
Documenting a bug in HHVM:
expected: 1975-07-07 00:00:00
result: 1975-07-06 23:00:00
expected: 1975-07-07 00:00:00
result: 1975-07-07 00:00:00
Fatal error: Uncaught exception 'ErrorException' with message 'Argument 2 passed to date_create_from_format() must be an instance of string, int given' in /home/travis/build/mindplay-dk/kissform/test/test.php:724
Stack trace:
#0 (): {closure}()
#1 /home/travis/build/mindplay-dk/kissform/test/test.php(724): date_create_from_format()
#2 {main}
```
In the first example, setting the timezone up front doesn't seem to work - it looks like setting the timestamp wipes out the timezone? That's not how it behaves under PHP.
As you can see, I found the workaround (1) which is to set the timezone _after_ setting the timestamp.
Possibly unrelated (?) but the argument to `date_create_from_format()` is an integer in this example, which normally under PHP would automatically be converted. (The obvious work-around is manually casting to a string.)
Here's the [build log on Travis-CI](https://travis-ci.org/mindplay-dk/kissform/jobs/52241469).
182,461 | 30,851,934,661 | IssuesEvent | 2023-08-02 17:24:28 | bcgov/cloud-pathfinder | https://api.github.com/repos/bcgov/cloud-pathfinder | closed | Baseline our KPIs for future reporting to governance bodies | Service Design | **Describe the issue**
Our business case for building out the Public Cloud Accelerator Service goes to Digital Investment Board on October 21. It includes several KPIs that will measure our progress towards realizing the desired benefits, and we'll report on these to several governance bodies (ED cloud group (tri-weekly), ADMs (tri-weekly), ES client centric governance model, DIB (quarterly)).
Ideally we should baseline these KPIs in time for the DIB presentation on October 21.
**Which Sprint Goal is this issue related to?**
> Note: 'Milestone' is a ZenHub term that we use synonymously with 'Sprint'.
The 'Milestone' description (created in ZenHub) should clearly list Sprint Goals. This section should indicate which 'goal' this issue is related to.
**Additional context**
Here are the KPIs proposed in our business case:
- Number of compliant public cloud services available to ministry teams
- Number of teams onboarded to compliant public cloud platforms
- Net Promoter Score (1-10, How likely they are to recommend this service)
- Average time to bring a ministry team onboard and enable them to begin deploying an application
- Number of digital government services available
- Number, duration and cost of outages affecting public cloud digital services
**Definition of done**
- Determine feasibility of measuring each of these KPIs
- If any are not feasible, suggest variations or alternatives
- Establish repository for recording our KPIs and their measurements (Airtable?)
- Record baseline measurements of all KPIs
53,778 | 13,206,548,760 | IssuesEvent | 2020-08-14 20:29:40 | spack/spack | https://api.github.com/repos/spack/spack | opened | Installation issue: rocm-opencl | build-error | <!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install rocm-opencl
$ clinfo
ERROR: clGetPlatformIDs(-1001)
```
### Information on your system
<!-- Please include the output of `spack debug report` -->
* **Spack:** 0.15.4-521-1d152a44f
* **Python:** 3.7.8
* **Platform:** linux-solus4-zen2
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
<!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. -->
* [spack-build-env.txt](https://github.com/spack/spack/files/5076810/spack-build-env.txt)
* [spack-build-out.txt](https://github.com/spack/spack/files/5076811/spack-build-out.txt)
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
@arjun-raj-kuppala @srekolam
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [x] I have uploaded the build log and environment files
- [x] I have searched the issues of this repo and believe this is not a duplicate
### Extra Information
I've been watching the recently merged pull request https://github.com/spack/spack/pull/17422 to try and get OpenCL running on my system. The installation step of the rocm-opencl package completes properly but clinfo crashes for a couple reasons.
First, it appears that the OpenCL vendors path is hard coded into the rocm build as "/etc/OpenCL/vendors/" which seems to be a general issue as noted in https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/pull/111. Using the suggestion in https://github.com/RadeonOpenCompute/ROCm/issues/511#issuecomment-415609121 I was able to create the amdocl64.icd using the libamdocl64.so file in the rocm-opencl lib directory. I am not yet familiar enough with Spack to understand the proper way to fix such a path issue. I have seen the RPATH feature but still couldn't quite come up with how to go about it properly.
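For anyone hitting the same hard-coded path, the `.icd` vendor file is nothing magic: it is a one-line text file containing the absolute path of the driver library. A rough sketch of the work-around (the paths are illustrative -- the real target directory is `/etc/OpenCL/vendors`, which needs root, and the real `libamdocl64.so` sits under the rocm-opencl install prefix):

```python
import tempfile
from pathlib import Path

def write_icd(vendors_dir: Path, lib_path: Path) -> Path:
    """Create an OpenCL ICD vendor file: one line holding the driver library path."""
    vendors_dir.mkdir(parents=True, exist_ok=True)
    icd = vendors_dir / "amdocl64.icd"
    icd.write_text(f"{lib_path}\n")
    return icd

# Demonstrated against a throwaway directory; substitute the real paths.
with tempfile.TemporaryDirectory() as tmp:
    icd = write_icd(Path(tmp) / "OpenCL" / "vendors",
                    Path("/path/to/rocm-opencl/lib/libamdocl64.so"))
    print(icd.read_text(), end="")  # /path/to/rocm-opencl/lib/libamdocl64.so
```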
Second, even after hackily making it find the proper vendor files it still crashes out due to a necessary runtime dependency on comgr. Changing `depends_on('comgr@3.5.0', type='build', when='@3.5.0')` to `depends_on('comgr@3.5.0', type=('build', 'run'), when='@3.5.0')` in the rocm-opencl package.py allows clinfo to run and correctly identifies my RX480 as an OpenCL device. The clinfo output is here:
<details>
<summary>clinfo output</summary>
```
$ clinfo
Number of platforms: 1
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 2.0 AMD-APP (3137.0)
Platform Name: AMD Accelerated Parallel Processing
Platform Vendor: Advanced Micro Devices, Inc.
Platform Extensions: cl_khr_icd cl_amd_event_callback
Platform Name: AMD Accelerated Parallel Processing
Number of devices: 1
Device Type: CL_DEVICE_TYPE_GPU
Vendor ID: 1002h
Board name: Ellesmere [Radeon RX 470/480/570/570X/580/580X/590]
Device Topology: PCI[ B#45, D#0, F#0 ]
Max compute units: 36
Max work items dimensions: 3
Max work items[0]: 1024
Max work items[1]: 1024
Max work items[2]: 1024
Max work group size: 256
Preferred vector width char: 4
Preferred vector width short: 2
Preferred vector width int: 1
Preferred vector width long: 1
Preferred vector width float: 1
Preferred vector width double: 1
Native vector width char: 4
Native vector width short: 2
Native vector width int: 1
Native vector width long: 1
Native vector width float: 1
Native vector width double: 1
Max clock frequency: 1266Mhz
Address bits: 64
Max memory allocation: 7301444403
Image support: No
Max size of kernel argument: 1024
Alignment (bits) of base address: 1024
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
Denorms: No
Quiet NaNs: Yes
Round to nearest even: Yes
Round to zero: Yes
Round to +ve and infinity: Yes
IEEE754-2008 fused multiply-add: Yes
Cache type: Read/Write
Cache line size: 64
Cache size: 16384
Global memory size: 8589934592
Constant buffer size: 7301444403
Max number of constant args: 8
Local memory type: Scratchpad
Local memory size: 65536
Max pipe arguments: 16
Max pipe active reservations: 16
Max pipe packet size: 3006477107
Max global variable size: 7301444403
Max global variable preferred total size: 8589934592
Max read/write image args: 0
Max on device events: 1024
Queue on device max size: 8388608
Max on device queues: 1
Queue on device preferred size: 262144
SVM capabilities:
Coarse grain buffer: Yes
Fine grain buffer: Yes
Fine grain system: No
Atomics: No
Preferred platform atomic alignment: 0
Preferred global atomic alignment: 0
Preferred local atomic alignment: 0
Kernel Preferred work group size multiple: 64
Error correction support: 0
Unified memory for Host and Device: 0
Profiling timer resolution: 1
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities:
Execute OpenCL kernels: Yes
Execute native function: No
Queue on Host properties:
Out-of-Order: No
Profiling : Yes
Queue on Device properties:
Out-of-Order: Yes
Profiling : Yes
Platform ID: 0x7fae78e09c90
Name: gfx803
Vendor: Advanced Micro Devices, Inc.
Device OpenCL C version: OpenCL C 2.0
Driver version: 3137.0 (HSA1.1,LC)
Profile: FULL_PROFILE
Version: OpenCL 1.2
Extensions: cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program
```
</details>
This is at the point where things really stopped working though. The build is missing image support which makes testing it difficult. At the very least running darktable-cltest while in a Spack environment where rocm-opencl is installed appears to miss the GPU altogether as seen here:
<details>
<summary>darktable-cltest output</summary>
```
$ darktable-cltest
0.019987 [opencl_init] opencl related configuration options:
0.019997 [opencl_init]
0.019999 [opencl_init] opencl: 1
0.020000 [opencl_init] opencl_scheduling_profile: 'default'
0.020002 [opencl_init] opencl_library: ''
0.020008 [opencl_init] opencl_memory_requirement: 768
0.020013 [opencl_init] opencl_memory_headroom: 400
0.020015 [opencl_init] opencl_device_priority: '*/!0,*/*/*/!0,*'
0.020017 [opencl_init] opencl_mandatory_timeout: 200
0.020019 [opencl_init] opencl_size_roundup: 16
0.020021 [opencl_init] opencl_async_pixelpipe: 0
0.020022 [opencl_init] opencl_synch_cache: active module
0.020024 [opencl_init] opencl_number_event_handles: 25
0.020026 [opencl_init] opencl_micro_nap: 1000
0.020028 [opencl_init] opencl_use_pinned_memory: 0
0.020030 [opencl_init] opencl_use_cpu_devices: 0
0.020031 [opencl_init] opencl_avoid_atomics: 0
0.020033 [opencl_init]
0.023562 [opencl_init] found opencl runtime library 'libOpenCL'
0.023591 [opencl_init] opencl library 'libOpenCL' found on your system and loaded
0.023595 [opencl_init] found 1 platform
0.042325 [opencl_init] found 1 device
0.042358 [opencl_init] discarding CPU device 0 `pthread-AMD Ryzen 9 3900X 12-Core Processor'.
0.042364 [opencl_init] no suitable devices found.
0.042367 [opencl_init] FINALLY: opencl is NOT AVAILABLE on this system.
0.042369 [opencl_init] initial status of opencl enabled flag is OFF.
```
</details>
Attempting to run Blender from the command prompt seems to get it stuck in a never ending loop with one thread running at 100%. Unfortunately the strace for Blender wasn't particularly helpful due to the large number of commands it was running. The last application I tried was plaidml which, similar to darktable, was able to find my CPU somehow but appeared to never load the configuration for the GPU. With strace I was able to find something that possibly is causing the issue though. It appears that the rocm-opencl build creates both a lib and a lib64 folder and both contain a libOpenCL.so file. Plaidml appears to find the lib64 version first as seen here
`openat(AT_FDCWD, "/mnt/fe9dcbea-9428-40db-8650-4a941be01d52/Documents/spack/var/spack/environments/cltest/.spack-env/view/lib64/libOpenCL.so", O_RDONLY|O_CLOEXEC) = 4`
I am guessing this is the problem because only the lib folder contains libamdocl64.so which I believe means that it never tries to pull in the vendor file but I am a bit out of my depth here.
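A quick way to confirm that suspicion is to reproduce the layout and walk the search order directly. The sketch below (purely illustrative, using throwaway directories rather than the actual Spack view) shows how a view exposing both `lib64/` and `lib/` resolves `lib64/libOpenCL.so` first, even though the vendor driver `libamdocl64.so` only sits next to the copy in `lib/`:

```python
import tempfile
from pathlib import Path

def find_libopencl(search_dirs):
    """Return every libOpenCL.so on the path, in order -- the loader binds the first."""
    return [Path(d) / "libOpenCL.so"
            for d in search_dirs
            if (Path(d) / "libOpenCL.so").exists()]

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for sub in ("lib64", "lib"):
        (root / sub).mkdir()
        (root / sub / "libOpenCL.so").touch()
    (root / "lib" / "libamdocl64.so").touch()  # driver only beside lib/'s copy

    first = find_libopencl([root / "lib64", root / "lib"])[0]
    print(first.parent.name)  # lib64 -- found first, away from the driver
```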
Any assistance would be greatly appreciated, I understand the rocm components just got pulled in and issues were to be expected, I have just hit the end of my knowledge and am not quite sure how to progress from here.
121,957 | 10,208,267,511 | IssuesEvent | 2019-08-14 09:42:28 | maidsafe/safe_client_libs | https://api.github.com/repos/maidsafe/safe_client_libs | closed | Add tests for Unpublished Unsequenced AppendOnly data | testing | Similar to [this test module](https://github.com/maidsafe/safe_client_libs/blob/experimental/safe_app/src/tests/unpublished_mutable_data.rs) there should be extensive tests to test the various scenarios of Unpublished Unsequenced AppendOnly data. | 1.0 | Add tests for Unpublished Unsequenced AppendOnly data - Similar to [this test module](https://github.com/maidsafe/safe_client_libs/blob/experimental/safe_app/src/tests/unpublished_mutable_data.rs) there should be extensive tests to test the various scenarios of Unpublished Unsequenced AppendOnly data. | non_priority | add tests for unpublished unsequenced appendonly data similar to there should be extensive tests to test the various scenarios of unpublished unsequenced appendonly data | 0 |
337,656 | 30,253,303,752 | IssuesEvent | 2023-07-06 22:51:57 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix manipulation.test_squeeze | Sub Task Failing Test | | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5412847572/jobs/9837524581"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5412847572/jobs/9837524581"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5412847572/jobs/9837524581"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5442531781"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5480661567"><img src=https://img.shields.io/badge/-success-success></a>
| 1.0 | Fix manipulation.test_squeeze - | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5412847572/jobs/9837524581"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5412847572/jobs/9837524581"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5412847572/jobs/9837524581"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5442531781"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5480661567"><img src=https://img.shields.io/badge/-success-success></a>
| non_priority | fix manipulation test squeeze jax a href src numpy a href src tensorflow a href src torch a href src paddle a href src | 0 |
138,542 | 11,204,914,251 | IssuesEvent | 2020-01-05 10:19:18 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | HTTPRequest fails on sites with IPv6 | bug needs testing topic:network | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:** 3.0.2 Stable
<!-- Specify commit hash if non-official. -->
**OS/device including version:** Windows 10. I have IPv6 enabled.
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:** Accessing some sites with HTTPRequest fails with Error 2
<!-- What happened, and what was expected. -->
**Steps to reproduce:**
Try to HTTPRequest a website with IPv6 support, with IPv6 enabled. (I believe)
**Minimal reproduction project:**
[httprequest-fail.zip](https://github.com/godotengine/godot/files/1924795/httprequest-fail.zip)
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
Here's what it outputs on my Windows 10:
```
Loaded certs from 'res://ca-certificates.crt': 151
https://www.reddit.com/ succeeds.
It uses its own servers and DigiCert CA
http://api.myjson.com/bins/pnmlz succeeds.
It uses unknown hosting, no SSL
https://api.myjson.com/bins/pnmlz succeeds.
It uses the same site, with GEOTRUST CA
https://www.humblebundle.com/ fails with error #2
It uses Google App Engine and COMODO CA
http://lunaphippscostin.com/ fails with error #2
It uses Google App Engine but no SSL
https://letsencrypt.org/ fails with error #2
It uses unknown hosting and Let's Encrypt CA
```
On linux (idk if I have IPv6 enabled though) it works perfectly:
```
CERT STR: /C=US/ST=California/L=San Francisco/O=Reddit Inc./CN=*.reddit.com
VALID: 1
CONNECTION RESULT: 1
cert_ok: 1
https://www.reddit.com/ succeeds.
It uses its own servers and DigiCert CA
http://api.myjson.com/bins/pnmlz succeeds.
It uses unknown hosting, no SSL
CERT STR: /CN=api.myjson.com
VALID: 1
CONNECTION RESULT: 1
cert_ok: 1
https://api.myjson.com/bins/pnmlz succeeds.
It uses the same site, with GEOTRUST CA
CERT STR: /serialNumber=4903485/jurisdictionC=US/jurisdictionST=Delaware/businessCategory=Private Organization/C=US/postalCode=94108/ST=CA/L=San Francisco/street=Floor 11/street=201 Post St/O=Humble Bundle, Inc./OU=COMODO EV SSL/CN=www.humblebundle.com
VALID: 1
CONNECTION RESULT: 1
cert_ok: 1
https://www.humblebundle.com/ succeeds.
It uses Google App Engine and COMODO CA
http://lunaphippscostin.com/ succeeds.
It uses Google App Engine but no SSL
CERT STR: /CN=www.letsencrypt.org
VALID: 1
CONNECTION RESULT: 1
cert_ok: 1
https://letsencrypt.org/ succeeds.
It uses unknown hosting and Let's Encrypt CA
```
The ca-certificates is the one from linux /etc/whatever | 1.0 | HTTPRequest fails on sites with IPv6 - <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:** 3.0.2 Stable
<!-- Specify commit hash if non-official. -->
**OS/device including version:** Windows 10. I have IPv6 enabled.
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:** Accessing some sites with HTTPRequest fails with Error 2
<!-- What happened, and what was expected. -->
**Steps to reproduce:**
Try to HTTPRequest a website with IPv6 support, with IPv6 enabled. (I believe)
**Minimal reproduction project:**
[httprequest-fail.zip](https://github.com/godotengine/godot/files/1924795/httprequest-fail.zip)
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
Here's what it outputs on my Windows 10:
```
Loaded certs from 'res://ca-certificates.crt': 151
https://www.reddit.com/ succeeds.
It uses its own servers and DigiCert CA
http://api.myjson.com/bins/pnmlz succeeds.
It uses unknown hosting, no SSL
https://api.myjson.com/bins/pnmlz succeeds.
It uses the same site, with GEOTRUST CA
https://www.humblebundle.com/ fails with error #2
It uses Google App Engine and COMODO CA
http://lunaphippscostin.com/ fails with error #2
It uses Google App Engine but no SSL
https://letsencrypt.org/ fails with error #2
It uses unknown hosting and Let's Encrypt CA
```
On linux (idk if I have IPv6 enabled though) it works perfectly:
```
CERT STR: /C=US/ST=California/L=San Francisco/O=Reddit Inc./CN=*.reddit.com
VALID: 1
CONNECTION RESULT: 1
cert_ok: 1
https://www.reddit.com/ succeeds.
It uses its own servers and DigiCert CA
http://api.myjson.com/bins/pnmlz succeeds.
It uses unknown hosting, no SSL
CERT STR: /CN=api.myjson.com
VALID: 1
CONNECTION RESULT: 1
cert_ok: 1
https://api.myjson.com/bins/pnmlz succeeds.
It uses the same site, with GEOTRUST CA
CERT STR: /serialNumber=4903485/jurisdictionC=US/jurisdictionST=Delaware/businessCategory=Private Organization/C=US/postalCode=94108/ST=CA/L=San Francisco/street=Floor 11/street=201 Post St/O=Humble Bundle, Inc./OU=COMODO EV SSL/CN=www.humblebundle.com
VALID: 1
CONNECTION RESULT: 1
cert_ok: 1
https://www.humblebundle.com/ succeeds.
It uses Google App Engine and COMODO CA
http://lunaphippscostin.com/ succeeds.
It uses Google App Engine but no SSL
CERT STR: /CN=www.letsencrypt.org
VALID: 1
CONNECTION RESULT: 1
cert_ok: 1
https://letsencrypt.org/ succeeds.
It uses unknown hosting and Let's Encrypt CA
```
The ca-certificates is the one from linux /etc/whatever | non_priority | httprequest fails on sites with please search existing issues for potential duplicates before filing yours godot version stable os device including version windows i have enabled issue description accessing some sites with httprequest fails with error steps to reproduce try to httprequest a website with support with enabled i believe minimal reproduction project here s what it outputs on my windows loaded certs from res ca certificates crt succeeds it uses its own servers and digicert ca succeeds it uses unknown hosting no ssl succeeds it uses the same site with geotrust ca fails with error it uses google app engine and comodo ca fails with error it uses google app engine but no ssl fails with error it uses unknown hosting and let s encrypt ca on linux idk if i have enabled though it works perfectly cert str c us st california l san francisco o reddit inc cn reddit com valid connection result cert ok succeeds it uses its own servers and digicert ca succeeds it uses unknown hosting no ssl cert str cn api myjson com valid connection result cert ok succeeds it uses the same site with geotrust ca cert str serialnumber jurisdictionc us jurisdictionst delaware businesscategory private organization c us postalcode st ca l san francisco street floor street post st o humble bundle inc ou comodo ev ssl cn valid connection result cert ok succeeds it uses google app engine and comodo ca succeeds it uses google app engine but no ssl cert str cn valid connection result cert ok succeeds it uses unknown hosting and let s encrypt ca the ca certificates is the one from linux etc whatever | 0 |
392,905 | 26,964,531,807 | IssuesEvent | 2023-02-08 21:04:38 | py-why/dodiscover | https://api.github.com/repos/py-why/dodiscover | opened | [DOC] Relevant in-depth tutorial on FCI | documentation help wanted | Brought up in https://github.com/py-why/dodiscover/pull/106#discussion_r1100661794, we want to develop a good example (or set of examples) of causal graphs that we then generate data from that we then feed into FCI.
This can then be used to illustrate how FCI applies its rules and that selection bias (i.e. undirected edges), bidirected edges, and directed edges can all be recovered in certain settings. We can use both an oracle and a real CI test to further illustrate the impact of the faithfulness assumption.
Note: compared to dowhy, we want to run all examples in every single CI run, so ideally the full example should be runnable on your own computer/laptop within <20 seconds. | 1.0 | [DOC] Relevant in-depth tutorial on FCI - Brought up in https://github.com/py-why/dodiscover/pull/106#discussion_r1100661794, we want to develop a good example (or set of examples) of causal graphs that we then generate data from that we then feed into FCI.
This can then be used to illustrate how FCI applies its rules and that selection bias (i.e. undirected edges), bidirected edges, and directed edges can all be recovered in certain settings. We can use both an oracle and a real CI test to further illustrate the impact of the faithfulness assumption.
Note: compared to dowhy, we want to run all examples in every single CI run, so ideally the full example should be runnable on your own computer/laptop within <20 seconds. | non_priority | relevant in depth tutorial on fci brought up in we want to develop a good example or set of examples of causal graphs that we then generate data from that we then feed into fci this can be then used to illustrate how fci applies its rules and that selection bias i e undirected edges bidirected edges and directed edges can all be recovered in certain settings we can both use an oracle and real ci test to further illustrate the impact of faithfulness assumption note compared to dowhy we want to run all examples in every single ci run so ideally the full example should be runnable on your own computer laptop within seconds | 0 |
98,130 | 29,489,013,681 | IssuesEvent | 2023-06-02 12:06:03 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | [Task]: Refactor sticky canvas Arena to avoid unnecessary forced reflow | Performance Pod UI Builders Pod Task Drag & Drop Canvas / Grid Performance | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
Refactor sticky canvas Arena to avoid unnecessary forced reflow. This is done by enabling the canvas to observe only when required, i.e., only when the user is using:
- [ ] Drag to select
- [ ] Dragging widgets | 1.0 | [Task]: Refactor sticky canvas Arena to avoid unnecessary forced reflow - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
Refactor sticky canvas Arena to avoid unnecessary forced reflow. This is done by enabling the canvas to observe only when required, i.e., only when the user is using:
- [ ] Drag to select
- [ ] Dragging widgets | non_priority | refactor sticky canvas arena to avoid unnecessary forced reflow is there an existing issue for this i have searched the existing issues subtasks refactor sticky canvas arena to avoid unnecessary forced reflow this is done by enabling the canvas to observe only when required i e only when user is using drag to select dragging widgets | 0 |
186,117 | 15,047,653,346 | IssuesEvent | 2021-02-03 09:12:26 | apache/buildstream | https://api.github.com/repos/apache/buildstream | closed | Follow-up from "Refer readers to our tutorial before referring them to existing bst projects" | bug documentation | [See original issue on GitLab](https://gitlab.com/BuildStream/buildstream/-/issues/608)
In GitLab by [[Gitlab user @tristanvb]](https://gitlab.com/tristanvb) on Aug 25, 2018, 10:53
The following discussion from !578 should be addressed:
- [x] [[Gitlab user @tristanvb]](https://gitlab.com/tristanvb) started a [discussion](https://gitlab.com/BuildStream/buildstream/merge_requests/578#note_96824224):
> Just noticed this in the install guide today.
>
> Why are we recommending that windows users use the docker install guide ?
>
> Is it even marginally possible that BuildStream will actually work with docker on windows ? Has anyone tested this ?
>
> I think we should be careful to remove this claim quickly before someone reads this an complains about it.
Note: The above refers to the `.. note::` in the [install guide](http://buildstream.gitlab.io/buildstream/main_install.html) which currently states:
> *"BuildStream is not currently supported natively on macOS and Windows. Windows and macOS users should refer to BuildStream inside Docker."*
| 1.0 | Follow-up from "Refer readers to our tutorial before referring them to existing bst projects" - [See original issue on GitLab](https://gitlab.com/BuildStream/buildstream/-/issues/608)
In GitLab by [[Gitlab user @tristanvb]](https://gitlab.com/tristanvb) on Aug 25, 2018, 10:53
The following discussion from !578 should be addressed:
- [x] [[Gitlab user @tristanvb]](https://gitlab.com/tristanvb) started a [discussion](https://gitlab.com/BuildStream/buildstream/merge_requests/578#note_96824224):
> Just noticed this in the install guide today.
>
> Why are we recommending that windows users use the docker install guide ?
>
> Is it even marginally possible that BuildStream will actually work with docker on windows ? Has anyone tested this ?
>
> I think we should be careful to remove this claim quickly before someone reads this an complains about it.
Note: The above refers to the `.. note::` in the [install guide](http://buildstream.gitlab.io/buildstream/main_install.html) which currently states:
> *"BuildStream is not currently supported natively on macOS and Windows. Windows and macOS users should refer to BuildStream inside Docker."*
| non_priority | follow up from refer readers to our tutorial before referring them to existing bst projects in gitlab by on aug the following discussion from should be addressed started a just noticed this in the install guide today why are we recommending that windows users use the docker install guide is it even marginally possible that buildstream will actually work with docker on windows has anyone tested this i think we should be careful to remove this claim quickly before someone reads this an complains about it note the above refers to the note in the which currently states buildstream is not currently supported natively on macos and windows windows and macos users should refer to buildstream inside docker | 0 |
133,458 | 29,181,422,691 | IssuesEvent | 2023-05-19 12:14:24 | AllYarnsAreBeautiful/ayab-desktop | https://api.github.com/repos/AllYarnsAreBeautiful/ayab-desktop | closed | API version sent by device is not used by host | code quality | `control.api_version` is set in state `VERSION_CHECK` but `control.operate()` does not use the value: instead, it uses a default value of 6. | 1.0 | API version sent by device is not used by host - `control.api_version` is set in state `VERSION_CHECK` but `control.operate()` does not use the value: instead, it uses a default value of 6. | non_priority | api version sent by device is not used by host control api version is set in state version check but control operate does not use the value instead it uses a default value of | 0 |
43,055 | 11,456,005,222 | IssuesEvent | 2020-02-06 20:17:09 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | Correlation between map pin and cards in search results | 508-defect-0 functional vsa | ## User Story:
As a Veteran, I need to be able to see correlation between location on map and list
## Tasks
[ ] Style map pins to correlate between map and list

## Acceptance Criteria:
[ ] Map pins reflect design update to Alpha & "dark gray", consistent with design spec shown above
[ ] Cards reflect Alpha & correlate to Alpha presentation/location on map
[ ] Users with assistive devices are presented with information linking the card ID with droplet/pin on map (aria-describedby)
[ ] Map pin accurately references correct card in result list
| 1.0 | Correlation between map pin and cards in search results - ## User Story:
As a Veteran, I need to be able to see correlation between location on map and list
## Tasks
[ ] Style map pins to correlate between map and list

## Acceptance Criteria:
[ ] Map pins reflect design update to Alpha & "dark gray", consistent with design spec shown above
[ ] Cards reflect Alpha & correlate to Alpha presentation/location on map
[ ] Users with assistive devices are presented with information linking the card ID with droplet/pin on map (aria-describedby)
[ ] Map pin accurately references correct card in result list
| non_priority | correlation between map pin and cards in search results user story as a veteran i need to be able to see correlation between location on map and list tasks style map pins to correlate between map and list acceptance criteria map pins reflect design update to alpha dark gray consistent with design spec shown above cards reflect alpha correlate to alpha presentation location on map users with assistive devices are presented with information linking the card id with droplet pin on map aria describedby map pin accurately references correct card in result list | 0 |
56,053 | 14,078,379,905 | IssuesEvent | 2020-11-04 13:29:32 | themagicalmammal/android_kernel_samsung_j7elte | https://api.github.com/repos/themagicalmammal/android_kernel_samsung_j7elte | opened | CVE-2018-1000028 (High) detected in linuxv3.10 | security vulnerability | ## CVE-2018-1000028 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.10</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/themagicalmammal/android_kernel_samsung_j7elte/commit/adc86a86e0ac98007fd3af905bc71e9f29c1502c">adc86a86e0ac98007fd3af905bc71e9f29c1502c</a></p>
<p>Found in base branch: <b>xsentinel-1.7-experimental</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_j7elte/fs/nfsd/auth.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_j7elte/fs/nfsd/auth.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux kernel versions after commit bdcf0a423ea1 - 4.15-rc4+, 4.14.8+, 4.9.76+, 4.4.111+ contain an Incorrect Access Control vulnerability in the NFS server (nfsd) that can result in remote users reading or writing files they should not be able to via NFS. This attack appears to be exploitable when the NFS server exports a filesystem with the "rootsquash" option enabled. This vulnerability appears to have been fixed after commit 1995266727fa.
<p>Publish Date: 2018-02-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000028>CVE-2018-1000028</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-1000028">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-1000028</a></p>
<p>Release Date: 2018-02-09</p>
<p>Fix Resolution: v4.15</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-1000028 (High) detected in linuxv3.10 - ## CVE-2018-1000028 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.10</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/themagicalmammal/android_kernel_samsung_j7elte/commit/adc86a86e0ac98007fd3af905bc71e9f29c1502c">adc86a86e0ac98007fd3af905bc71e9f29c1502c</a></p>
<p>Found in base branch: <b>xsentinel-1.7-experimental</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_j7elte/fs/nfsd/auth.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_j7elte/fs/nfsd/auth.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux kernel versions after commit bdcf0a423ea1 - 4.15-rc4+, 4.14.8+, 4.9.76+, 4.4.111+ contain an Incorrect Access Control vulnerability in the NFS server (nfsd) that can result in remote users reading or writing files they should not be able to via NFS. This attack appears to be exploitable when the NFS server exports a filesystem with the "rootsquash" option enabled. This vulnerability appears to have been fixed after commit 1995266727fa.
<p>Publish Date: 2018-02-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000028>CVE-2018-1000028</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-1000028">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-1000028</a></p>
<p>Release Date: 2018-02-09</p>
<p>Fix Resolution: v4.15</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in cve high severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch xsentinel experimental vulnerable source files android kernel samsung fs nfsd auth c android kernel samsung fs nfsd auth c vulnerability details linux kernel version after commit contains a incorrect access control vulnerability in nfs server nfsd that can result in remote users reading or writing files they should not be able to via nfs this attack appear to be exploitable via nfs server must export a filesystem with the rootsquash options enabled this vulnerability appears to have been fixed in after commit publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
254,898 | 19,276,345,678 | IssuesEvent | 2021-12-10 12:19:22 | Gaius-Augustus/clamsa | https://api.github.com/repos/Gaius-Augustus/clamsa | opened | Warning for out-of-phase inputs | documentation | Users should be warned if input appears to be a genome alignment without an assumed ORF in the same phase for all alignment rows. | 1.0 | Warning for out-of-phase inputs - Users should be warned if input appears to be a genome alignment without an assumed ORF in the same phase for all alignment rows. | non_priority | warning for out of phase inputs users should be warned if input appears to be a genome alignment without an assumed orf in the same phase for all alignment rows | 0 |
27,811 | 8,037,782,094 | IssuesEvent | 2018-07-30 13:41:45 | angular/angular-cli | https://api.github.com/repos/angular/angular-cli | closed | Rebuilding or serving library project | comp: devkit/build-angular type: feature | <!--
We will close this issue if you don't provide the needed information.
For feature requests, delete the form below and describe the requirements and use case.
-->
### Feature Request
Some interface that would rebuild library project that is generated with `ng g library niceLib`.
Of course, the best option would be `ng serve niceLib`, or to include it in the application serve. Or at least another build mode that allows adding the `--watch` parameter like it does for the application. Here is what I mean:
* for application we are allowed to run `ng build --watch`
* for library we have `ng build niceLib`
* but we can't use `ng build niceLib --watch`, as it throws `Unknown option: '--watch'`
| 1.0 | Rebuilding or serving library project - <!--
We will close this issue if you don't provide the needed information.
For feature requests, delete the form below and describe the requirements and use case.
-->
### Feature Request
Some interface that would rebuild library project that is generated with `ng g library niceLib`.
Of course, the best option would be `ng serve niceLib`, or to include it in the application serve. Or at least another build mode that allows adding the `--watch` parameter like it does for the application. Here is what I mean:
* for application we are allowed to run `ng build --watch`
* for library we have `ng build niceLib`
* but we can't use `ng build niceLib --watch`, as it throws `Unknown option: '--watch'`
| non_priority | rebuilding or serving library project we will close this issue if you don t provide the needed information for feature requests delete the form below and describe the requirements and use case feature request some interface that would rebuild library project that is generated with ng g library nicelib of course the best option would be ng serve nicelib or include it to the application serve or at least another build that allow to add watch parameter like it does for for application here what i mean for application we are allowed to run ng build watch for library we have ng build nicelib but we can t use ng build nicelib watch as it throws unknown option watch | 0 |
262,152 | 19,762,291,821 | IssuesEvent | 2022-01-16 15:59:28 | degawa/dictos | https://api.github.com/repos/degawa/dictos | opened | remove 0 from stencil and table for the staggered grid | documentation | value at `0` is not used.
`0` breaks the consistency of the finite difference equation on the staggered grid. | 1.0 | remove 0 from stencil and table for the staggered grid - value at `0` is not used.
`0` breaks the consistency of the finite difference equation on the staggered grid. | non_priority | remove from stencil and table for the staggered grid value at is not used breaks the consistency of the finite difference equation on the staggered grid | 0 |
7,277 | 24,564,481,610 | IssuesEvent | 2022-10-13 00:42:08 | Roche/rtables | https://api.github.com/repos/Roche/rtables | closed | Add pkgdown publishing to repo automation | automation | @cicdguy would you mind doing this?
If you can add a multi-version documentation that would be awesome, otherwise you can just use the root (CRAN release) and `dev/` (`main`) method that `pkgdown` supports. | 1.0 | Add pkgdown publishing to repo automation - @cicdguy would you mind doing this?
If you can add a multi-version documentation that would be awesome, otherwise you can just use the root (CRAN release) and `dev/` (`main`) method that `pkgdown` supports. | non_priority | add pkgdown publishing to repo automation cicdguy would you mind doing this if you can add a multi version documentation that would be awesome otherwise you can just use the root cran release and dev main method that pkgdown supports | 0 |
66,711 | 14,798,942,185 | IssuesEvent | 2021-01-13 01:02:27 | jtimberlake/cloud-inquisitor | https://api.github.com/repos/jtimberlake/cloud-inquisitor | opened | CVE-2020-24025 (Medium) detected in node-sass-4.14.1.tgz | security vulnerability | ## CVE-2020-24025 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: cloud-inquisitor/frontend/package.json</p>
<p>Path to vulnerable library: cloud-inquisitor/frontend/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.0.2.tgz (Root Library)
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certificate validation in node-sass 2.0.0 to 4.14.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.
<p>Publish Date: 2021-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24025>CVE-2020-24025</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.14.1","isTransitiveDependency":true,"dependencyTree":"gulp-sass:4.0.2;node-sass:4.14.1","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2020-24025","vulnerabilityDetails":"Certificate validation in node-sass 2.0.0 to 4.14.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24025","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-24025 (Medium) detected in node-sass-4.14.1.tgz - ## CVE-2020-24025 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: cloud-inquisitor/frontend/package.json</p>
<p>Path to vulnerable library: cloud-inquisitor/frontend/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.0.2.tgz (Root Library)
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certificate validation in node-sass 2.0.0 to 4.14.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.
<p>Publish Date: 2021-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24025>CVE-2020-24025</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.14.1","isTransitiveDependency":true,"dependencyTree":"gulp-sass:4.0.2;node-sass:4.14.1","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2020-24025","vulnerabilityDetails":"Certificate validation in node-sass 2.0.0 to 4.14.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24025","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_priority | cve medium detected in node sass tgz cve medium severity vulnerability vulnerable library node sass tgz wrapper around libsass library home page a href path to dependency file cloud inquisitor frontend package json path to vulnerable library cloud inquisitor frontend node modules node sass package json dependency hierarchy gulp sass tgz root library x node sass tgz vulnerable library vulnerability details certificate validation in node sass to is disabled when requesting binaries even if the user is not specifying an alternative download path publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails certificate validation in node sass to is disabled when requesting binaries even if the user is not specifying an alternative download path vulnerabilityurl | 0 |
104,444 | 22,669,309,610 | IssuesEvent | 2022-07-03 11:20:25 | codechef-org/status | https://api.github.com/repos/codechef-org/status | closed | 🛑 CodeChef Goodies is down | status code-chef-goodies | In [`c4f21fa`](https://github.com/codechef-org/status/commit/c4f21fa2ca32e740310e26aae8eb35178b000a08
), CodeChef Goodies (https://goodies.codechef.com) was **down**:
- HTTP code: 0
- Response time: 0 ms
| 1.0 | 🛑 CodeChef Goodies is down - In [`c4f21fa`](https://github.com/codechef-org/status/commit/c4f21fa2ca32e740310e26aae8eb35178b000a08
), CodeChef Goodies (https://goodies.codechef.com) was **down**:
- HTTP code: 0
- Response time: 0 ms
| non_priority | 🛑 codechef goodies is down in codechef goodies was down http code response time ms | 0 |
109,797 | 23,824,318,564 | IssuesEvent | 2022-09-05 13:45:08 | aimhubio/aim | https://api.github.com/repos/aimhubio/aim | closed | Handle deprecations in PyTorch Lightning 1.7 API | area / integrations type / code-health phase / shipped | ## Proposed refactoring or deprecation
Change imports in `aim.sdk.adaptors.pytorch_lightning` to handle deprecations in PyTorch Lightning API.
* `pytorch_lightning.loggers.base.rank_zero_experiment` -> `pytorch_lightning.loggers.logger.rank_zero_experiment`
* `pytorch_lightning.loggers.base.LightningLoggerBase` -> `pytorch_lightning.loggers.logger.Logger`
### Motivation
Some of the API in PyTorch Lightning regarding custom loggers has been deprecated in versions 1.7. The imports should be adjusted accordingly. | 1.0 | Handle deprecations in PyTorch Lightning 1.7 API - ## Proposed refactoring or deprecation
Change imports in `aim.sdk.adaptors.pytorch_lightning` to handle deprecations in PyTorch Lightning API.
* `pytorch_lightning.loggers.base.rank_zero_experiment` -> `pytorch_lightning.loggers.logger.rank_zero_experiment`
* `pytorch_lightning.loggers.base.LightningLoggerBase` -> `pytorch_lightning.loggers.logger.Logger`
### Motivation
Some of the API in PyTorch Lightning regarding custom loggers has been deprecated in versions 1.7. The imports should be adjusted accordingly. | non_priority | handle deprecations in pytorch lightning api proposed refactoring or deprecation change imports in aim sdk adaptors pytorch lightning to handle deprecations in pytorch lightning api pytorch lightning loggers base rank zero experiment pytorch lightning loggers logger rank zero experiment pytorch lightning loggers base lightningloggerbase pytorch lightning loggers logger logger motivation some of the api in pytorch lightning regarding custom loggers has been deprecated in versions the imports should be adjusted accordingly | 0 |
417,552 | 28,110,620,689 | IssuesEvent | 2023-03-31 06:50:59 | slackernoob/ped | https://api.github.com/repos/slackernoob/ped | opened | Delete Assignment given in User Guide not working | type.DocumentationBug severity.High | Delete Assignment command example given in the User Guide does not work.

`delete_asgn n/Lab_1`

Perhaps a working example can be given instead.
<!--session: 1680242382440-c3f7337e-3056-4a4e-a64a-5a38ec12d212-->
<!--Version: Web v3.4.7--> | 1.0 | Delete Assignment given in User Guide not working - Delete Assignment command example given in the User Guide does not work.

`delete_asgn n/Lab_1`

Perhaps a working example can be given instead.
<!--session: 1680242382440-c3f7337e-3056-4a4e-a64a-5a38ec12d212-->
<!--Version: Web v3.4.7--> | non_priority | delete assignment given in user guide not working delete assignment command example given in the user guide does not work delete asgn n lab perhaps a working example can be given instead | 0 |
199,811 | 22,715,352,266 | IssuesEvent | 2022-07-06 01:07:38 | nexmo-community/stream-audio-into-call-php | https://api.github.com/repos/nexmo-community/stream-audio-into-call-php | opened | vonage/client-2.4.0: 6 vulnerabilities (highest severity is: 8.1) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>vonage/client-2.4.0</b></p></summary>
<p></p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-29248](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29248) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.1 | guzzlehttp/guzzle-7.2.0 | Transitive | N/A | ❌ |
| [CVE-2022-31091](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31091) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.7 | guzzlehttp/guzzle-7.2.0 | Transitive | N/A | ❌ |
| [CVE-2022-31090](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31090) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.7 | guzzlehttp/guzzle-7.2.0 | Transitive | N/A | ❌ |
| [CVE-2022-31043](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31043) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | guzzlehttp/guzzle-7.2.0 | Transitive | N/A | ❌ |
| [CVE-2022-31042](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31042) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | guzzlehttp/guzzle-7.2.0 | Transitive | N/A | ❌ |
| [CVE-2021-41106](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41106) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.3 | lcobucci/jwt-3.3.3 | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-29248</summary>
### Vulnerable Library - <b>guzzlehttp/guzzle-7.2.0</b></p>
<p>Guzzle is a PHP HTTP client library</p>
<p>Library home page: <a href="https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79">https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- :x: **guzzlehttp/guzzle-7.2.0** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Guzzle is a PHP HTTP client. Guzzle prior to versions 6.5.6 and 7.4.3 contains a vulnerability with the cookie middleware. The vulnerability is that it is not checked if the cookie domain equals the domain of the server which sets the cookie via the Set-Cookie header, allowing a malicious server to set cookies for unrelated domains. The cookie middleware is disabled by default, so most library consumers will not be affected by this issue. Only those who manually add the cookie middleware to the handler stack or construct the client with ['cookies' => true] are affected. Moreover, those who do not use the same Guzzle client to call multiple domains and have disabled redirect forwarding are not affected by this vulnerability. Guzzle versions 6.5.6 and 7.4.3 contain a patch for this issue. As a workaround, turn off the cookie middleware.
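
The fix for this class of bug is a domain-match check before honouring a `Set-Cookie` header. A minimal Python sketch of that check (illustrative only — Guzzle is PHP, and `cookie_domain_ok` is a hypothetical name, not Guzzle's API):

```python
def cookie_domain_ok(request_host: str, cookie_domain: str) -> bool:
    """Return True if a server at `request_host` may set a cookie scoped
    to `cookie_domain` (simplified RFC 6265 domain-match)."""
    host = request_host.lower().rstrip(".")
    domain = cookie_domain.lower().lstrip(".").rstrip(".")
    if not domain:
        return False
    # Accept only an exact match, or `host` being a subdomain of `domain`.
    return host == domain or host.endswith("." + domain)
```

With this check, a malicious server at `evil.example.org` cannot set a cookie scoped to an unrelated domain such as `bank.com`, which is exactly the behaviour the vulnerable cookie middleware failed to enforce.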
<p>Publish Date: 2022-05-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29248>CVE-2022-29248</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29248">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29248</a></p>
<p>Release Date: 2022-05-25</p>
<p>Fix Resolution: guzzlehttp/guzzle - 6.5.6,guzzlehttp/guzzle - 7.4.3</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31091</summary>
### Vulnerable Library - <b>guzzlehttp/guzzle-7.2.0</b></p>
<p>Guzzle is a PHP HTTP client library</p>
<p>Library home page: <a href="https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79">https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- :x: **guzzlehttp/guzzle-7.2.0** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Guzzle is an extensible PHP HTTP client. `Authorization` and `Cookie` headers on requests are sensitive information. In affected versions, on making a request which responds with a redirect to a URI with a different port, if we choose to follow it, we should remove the `Authorization` and `Cookie` headers from the request before continuing. Previously, we would only consider a change in host or scheme. Affected Guzzle 7 users should upgrade to Guzzle 7.4.5 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.8 or 7.4.5. Note that a partial fix was implemented in Guzzle 7.4.2, where a change in host would trigger removal of the curl-added Authorization header; however, this earlier fix did not cover a change in scheme or port. An alternative approach would be to use your own redirect middleware, rather than ours, if you are unable to upgrade. If you do not require or expect redirects to be followed, one should simply disable redirects altogether.
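
The redirect rule described above — drop sensitive headers whenever scheme, host, *or* port changes — can be sketched in a few lines. This is an illustrative Python sketch only, not Guzzle's actual middleware (Guzzle is PHP), and `headers_for_redirect` is a hypothetical name:

```python
from urllib.parse import urlsplit

SENSITIVE_HEADERS = ("Authorization", "Cookie")
DEFAULT_PORTS = {"http": 80, "https": 443}

def _origin(url: str):
    """Normalize a URL to its (scheme, host, port) origin triple."""
    p = urlsplit(url)
    return (p.scheme, p.hostname, p.port or DEFAULT_PORTS.get(p.scheme))

def headers_for_redirect(headers: dict, original_url: str, redirect_url: str) -> dict:
    """Return the headers to send when following a redirect.

    Sensitive headers are dropped whenever the redirect crosses origins;
    an origin change is any change in scheme, host, OR port (comparing
    the port as well is the part the 6.5.8 / 7.4.5 fix adds)."""
    if _origin(original_url) == _origin(redirect_url):
        return dict(headers)
    return {k: v for k, v in headers.items() if k not in SENSITIVE_HEADERS}
```

For example, a redirect from `https://api.example.com/a` to `https://api.example.com:8443/b` keeps the host and scheme but changes the port, so the `Authorization` header is stripped.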
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31091>CVE-2022-31091</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-31091">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-31091</a></p>
<p>Release Date: 2022-06-27</p>
<p>Fix Resolution: 6.5.8,7.4.5</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31090</summary>
### Vulnerable Library - <b>guzzlehttp/guzzle-7.2.0</b></p>
<p>Guzzle is a PHP HTTP client library</p>
<p>Library home page: <a href="https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79">https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- :x: **guzzlehttp/guzzle-7.2.0** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Guzzle is an extensible PHP HTTP client. `Authorization` headers on requests are sensitive information. In affected versions, when using our Curl handler, it is possible to use the `CURLOPT_HTTPAUTH` option to specify an `Authorization` header. On making a request which responds with a redirect to a URI with a different origin (change in host, scheme or port), if we choose to follow it, we should remove the `CURLOPT_HTTPAUTH` option before continuing, stopping curl from appending the `Authorization` header to the new request. Affected Guzzle 7 users should upgrade to Guzzle 7.4.5 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.8 or 7.4.5. Note that a partial fix was implemented in Guzzle 7.4.2, where a change in host would trigger removal of the curl-added Authorization header; however, this earlier fix did not cover a change in scheme or port. If you do not require or expect redirects to be followed, one should simply disable redirects altogether. Alternatively, one can specify to use the Guzzle stream handler backend, rather than curl.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31090>CVE-2022-31090</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/guzzle/guzzle/security/advisories/GHSA-25mq-v84q-4j7r">https://github.com/guzzle/guzzle/security/advisories/GHSA-25mq-v84q-4j7r</a></p>
<p>Release Date: 2022-05-19</p>
<p>Fix Resolution: 6.5.8,7.4.5</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31043</summary>
### Vulnerable Library - <b>guzzlehttp/guzzle-7.2.0</b></p>
<p>Guzzle is a PHP HTTP client library</p>
<p>Library home page: <a href="https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79">https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- :x: **guzzlehttp/guzzle-7.2.0** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Guzzle is an open source PHP HTTP client. In affected versions `Authorization` headers on requests are sensitive information. On making a request using the `https` scheme to a server which responds with a redirect to a URI with the `http` scheme, we should not forward the `Authorization` header on. This is much the same as how we don't forward on the header if the host changes. Prior to this fix, `https` to `http` downgrades did not result in the `Authorization` header being removed, only changes to the host. Affected Guzzle 7 users should upgrade to Guzzle 7.4.4 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.7 or 7.4.4. Users unable to upgrade may consider an alternative approach, which would be to use their own redirect middleware. Alternately, users may simply disable redirects altogether if redirects are not expected or required.
<p>Publish Date: 2022-06-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31043>CVE-2022-31043</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/guzzle/guzzle/security/advisories/GHSA-w248-ffj2-4v5q">https://github.com/guzzle/guzzle/security/advisories/GHSA-w248-ffj2-4v5q</a></p>
<p>Release Date: 2022-06-10</p>
<p>Fix Resolution: 6.5.7,7.4.4</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31042</summary>
### Vulnerable Library - <b>guzzlehttp/guzzle-7.2.0</b></p>
<p>Guzzle is a PHP HTTP client library</p>
<p>Library home page: <a href="https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79">https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- :x: **guzzlehttp/guzzle-7.2.0** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Guzzle is an open source PHP HTTP client. In affected versions the `Cookie` headers on requests are sensitive information. On making a request using the `https` scheme to a server which responds with a redirect to a URI with the `http` scheme, or on making a request to a server which responds with a redirect to a URI on a different host, we should not forward the `Cookie` header on. Prior to this fix, only cookies that were managed by our cookie middleware would be safely removed, and any `Cookie` header manually added to the initial request would not be stripped. We now always strip it, and allow the cookie middleware to re-add any cookies that it deems should be there. Affected Guzzle 7 users should upgrade to Guzzle 7.4.4 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.7 or 7.4.4. Users unable to upgrade may consider an alternative approach: using their own redirect middleware, rather than ours. If you do not require or expect redirects to be followed, one should simply disable redirects altogether.
<p>Publish Date: 2022-06-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31042>CVE-2022-31042</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/guzzle/guzzle/security/advisories/GHSA-f2wf-25xc-69c9">https://github.com/guzzle/guzzle/security/advisories/GHSA-f2wf-25xc-69c9</a></p>
<p>Release Date: 2022-06-10</p>
<p>Fix Resolution: 6.5.7,7.4.4</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2021-41106</summary>
### Vulnerable Library - <b>lcobucci/jwt-3.3.3</b></p>
<p>A simple library to work with JSON Web Token and JSON Web Signature</p>
<p>Library home page: <a href="https://api.github.com/repos/lcobucci/jwt/zipball/c1123697f6a2ec29162b82f170dd4a491f524773">https://api.github.com/repos/lcobucci/jwt/zipball/c1123697f6a2ec29162b82f170dd4a491f524773</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- vonage/client-core-v2.6.0
- :x: **lcobucci/jwt-3.3.3** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
JWT is a library to work with JSON Web Token and JSON Web Signature. Prior to versions 3.4.6, 4.0.4, and 4.1.5, users of HMAC-based algorithms (HS256, HS384, and HS512) combined with `Lcobucci\JWT\Signer\Key\LocalFileReference` as key are having their tokens issued/validated using the file path as the hashing key, instead of the file contents. The HMAC hashing functions take any string as input and, since users can issue and validate tokens, users are led to believe that everything works properly. Versions 3.4.6, 4.0.4, and 4.1.5 have been patched to always load the file contents, deprecate `Lcobucci\JWT\Signer\Key\LocalFileReference`, and suggest `Lcobucci\JWT\Signer\Key\InMemory` as the alternative. As a workaround, use `Lcobucci\JWT\Signer\Key\InMemory` instead of `Lcobucci\JWT\Signer\Key\LocalFileReference` to create the instances of one's keys.
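
The bug class here — keying the HMAC off the file *path* string instead of the file *contents* — is easy to reproduce. A minimal Python sketch (illustrative only; the affected library is PHP, and `sign_hs256` is a hypothetical helper, not the lcobucci/jwt API):

```python
import hashlib
import hmac
import os
import tempfile

def sign_hs256(signing_input: bytes, key: bytes) -> bytes:
    """HS256 signature: HMAC-SHA256 over the JWT signing input."""
    return hmac.new(key, signing_input, hashlib.sha256).digest()

# Write a key to disk, then compare signing with the file CONTENTS
# (correct) versus signing with the file PATH string (the bug).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"super-secret-key")
    path = f.name
try:
    with open(path, "rb") as fh:
        good = sign_hs256(b"header.payload", fh.read())   # keyed on contents
    bad = sign_hs256(b"header.payload", path.encode())    # keyed on the path
finally:
    os.unlink(path)

assert good != bad  # tokens "work" either way, but with the wrong key
```

Because HMAC accepts any byte string as a key, tokens signed with the path still verify against the path, which is why the bug went unnoticed: everything appears to work, just with a far weaker, guessable key.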
<p>Publish Date: 2021-09-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41106>CVE-2021-41106</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>3.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41106">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41106</a></p>
<p>Release Date: 2021-09-28</p>
<p>Fix Resolution: lcobucci/jwt - 3.4.6,4.0.4,4.1.5</p>
</p>
<p></p>
</details> | True | vonage/client-2.4.0: 6 vulnerabilities (highest severity is: 8.1) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>vonage/client-2.4.0</b></p></summary>
<p></p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-29248](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29248) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.1 | guzzlehttp/guzzle-7.2.0 | Transitive | N/A | ❌ |
| [CVE-2022-31091](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31091) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.7 | guzzlehttp/guzzle-7.2.0 | Transitive | N/A | ❌ |
| [CVE-2022-31090](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31090) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.7 | guzzlehttp/guzzle-7.2.0 | Transitive | N/A | ❌ |
| [CVE-2022-31043](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31043) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | guzzlehttp/guzzle-7.2.0 | Transitive | N/A | ❌ |
| [CVE-2022-31042](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31042) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | guzzlehttp/guzzle-7.2.0 | Transitive | N/A | ❌ |
| [CVE-2021-41106](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41106) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.3 | lcobucci/jwt-3.3.3 | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-29248</summary>
### Vulnerable Library - <b>guzzlehttp/guzzle-7.2.0</b></p>
<p>Guzzle is a PHP HTTP client library</p>
<p>Library home page: <a href="https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79">https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- :x: **guzzlehttp/guzzle-7.2.0** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Guzzle is a PHP HTTP client. Guzzle prior to versions 6.5.6 and 7.4.3 contains a vulnerability with the cookie middleware. The vulnerability is that it is not checked if the cookie domain equals the domain of the server which sets the cookie via the Set-Cookie header, allowing a malicious server to set cookies for unrelated domains. The cookie middleware is disabled by default, so most library consumers will not be affected by this issue. Only those who manually add the cookie middleware to the handler stack or construct the client with ['cookies' => true] are affected. Moreover, those who do not use the same Guzzle client to call multiple domains and have disabled redirect forwarding are not affected by this vulnerability. Guzzle versions 6.5.6 and 7.4.3 contain a patch for this issue. As a workaround, turn off the cookie middleware.
<p>Publish Date: 2022-05-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29248>CVE-2022-29248</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29248">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29248</a></p>
<p>Release Date: 2022-05-25</p>
<p>Fix Resolution: guzzlehttp/guzzle - 6.5.6,guzzlehttp/guzzle - 7.4.3</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31091</summary>
### Vulnerable Library - <b>guzzlehttp/guzzle-7.2.0</b></p>
<p>Guzzle is a PHP HTTP client library</p>
<p>Library home page: <a href="https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79">https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- :x: **guzzlehttp/guzzle-7.2.0** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Guzzle is an extensible PHP HTTP client. `Authorization` and `Cookie` headers on requests are sensitive information. In affected versions, on making a request which responds with a redirect to a URI with a different port, if we choose to follow it, we should remove the `Authorization` and `Cookie` headers from the request before continuing. Previously, we would only consider a change in host or scheme. Affected Guzzle 7 users should upgrade to Guzzle 7.4.5 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.8 or 7.4.5. Note that a partial fix was implemented in Guzzle 7.4.2, where a change in host would trigger removal of the curl-added Authorization header; however, this earlier fix did not cover a change in scheme or port. An alternative approach would be to use your own redirect middleware, rather than ours, if you are unable to upgrade. If you do not require or expect redirects to be followed, one should simply disable redirects altogether.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31091>CVE-2022-31091</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-31091">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-31091</a></p>
<p>Release Date: 2022-06-27</p>
<p>Fix Resolution: 6.5.8,7.4.5</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31090</summary>
### Vulnerable Library - <b>guzzlehttp/guzzle-7.2.0</b></p>
<p>Guzzle is a PHP HTTP client library</p>
<p>Library home page: <a href="https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79">https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- :x: **guzzlehttp/guzzle-7.2.0** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Guzzle is an extensible PHP HTTP client. `Authorization` headers on requests are sensitive information. In affected versions, when using our Curl handler, it is possible to use the `CURLOPT_HTTPAUTH` option to specify an `Authorization` header. On making a request which responds with a redirect to a URI with a different origin (change in host, scheme or port), if we choose to follow it, we should remove the `CURLOPT_HTTPAUTH` option before continuing, stopping curl from appending the `Authorization` header to the new request. Affected Guzzle 7 users should upgrade to Guzzle 7.4.5 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.8 or 7.4.5. Note that a partial fix was implemented in Guzzle 7.4.2, where a change in host would trigger removal of the curl-added Authorization header; however, this earlier fix did not cover a change in scheme or port. If you do not require or expect redirects to be followed, one should simply disable redirects altogether. Alternatively, one can use the Guzzle stream handler backend rather than curl.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31090>CVE-2022-31090</a></p>
</p>
<p></p>
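The origin rule described above (strip credentials on any change of host, scheme, or port) can be sketched as a small predicate. This is a hypothetical Python stand-in for illustration, not Guzzle's actual redirect middleware:

```python
from urllib.parse import urlsplit

def should_strip_auth(original_url: str, redirect_url: str) -> bool:
    """Return True if sensitive headers (Authorization, CURLOPT_HTTPAUTH)
    should be dropped before following a redirect, per the rule in the
    Guzzle 6.5.8/7.4.5 fixes: any change of scheme, host, or port is a
    different origin. Simplification: an explicit default port (":443")
    compares unequal to an implicit one; real clients normalize ports."""
    o, r = urlsplit(original_url), urlsplit(redirect_url)
    return (o.scheme, o.hostname, o.port) != (r.scheme, r.hostname, r.port)

# Same origin: credentials may be kept.
print(should_strip_auth("https://api.example.com/a", "https://api.example.com/b"))  # → False
# Different port (the gap closed in 7.4.5): strip them.
print(should_strip_auth("https://api.example.com/a", "https://api.example.com:8443/b"))  # → True
```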
### CVSS 3 Score Details (<b>7.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/guzzle/guzzle/security/advisories/GHSA-25mq-v84q-4j7r">https://github.com/guzzle/guzzle/security/advisories/GHSA-25mq-v84q-4j7r</a></p>
<p>Release Date: 2022-05-19</p>
<p>Fix Resolution: 6.5.8,7.4.5</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31043</summary>
### Vulnerable Library - <b>guzzlehttp/guzzle-7.2.0</b></p>
<p>Guzzle is a PHP HTTP client library</p>
<p>Library home page: <a href="https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79">https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- :x: **guzzlehttp/guzzle-7.2.0** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Guzzle is an open source PHP HTTP client. In affected versions `Authorization` headers on requests are sensitive information. On making a request using the `https` scheme to a server which responds with a redirect to a URI with the `http` scheme, we should not forward the `Authorization` header on. This is much the same as how we don't forward the header on if the host changes. Prior to this fix, `https` to `http` downgrades did not result in the `Authorization` header being removed, only changes to the host. Affected Guzzle 7 users should upgrade to Guzzle 7.4.4 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.7 or 7.4.4. Users unable to upgrade may consider an alternative approach, which would be to use their own redirect middleware. Alternatively, users may simply disable redirects altogether if redirects are not expected or required.
<p>Publish Date: 2022-06-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31043>CVE-2022-31043</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/guzzle/guzzle/security/advisories/GHSA-w248-ffj2-4v5q">https://github.com/guzzle/guzzle/security/advisories/GHSA-w248-ffj2-4v5q</a></p>
<p>Release Date: 2022-06-10</p>
<p>Fix Resolution: 6.5.7,7.4.4</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31042</summary>
### Vulnerable Library - <b>guzzlehttp/guzzle-7.2.0</b></p>
<p>Guzzle is a PHP HTTP client library</p>
<p>Library home page: <a href="https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79">https://api.github.com/repos/guzzle/guzzle/zipball/0aa74dfb41ae110835923ef10a9d803a22d50e79</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- :x: **guzzlehttp/guzzle-7.2.0** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Guzzle is an open source PHP HTTP client. In affected versions the `Cookie` headers on requests are sensitive information. On making a request using the `https` scheme to a server which responds with a redirect to a URI with the `http` scheme, or on making a request to a server which responds with a redirect to a URI on a different host, we should not forward the `Cookie` header on. Prior to this fix, only cookies that were managed by our cookie middleware would be safely removed, and any `Cookie` header manually added to the initial request would not be stripped. We now always strip it, and allow the cookie middleware to re-add any cookies that it deems should be there. Affected Guzzle 7 users should upgrade to Guzzle 7.4.4 as soon as possible. Affected users using any earlier series of Guzzle should upgrade to Guzzle 6.5.7 or 7.4.4. Users unable to upgrade may consider an alternative approach and use their own redirect middleware, rather than ours. If you do not require or expect redirects to be followed, one should simply disable redirects altogether.
<p>Publish Date: 2022-06-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31042>CVE-2022-31042</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/guzzle/guzzle/security/advisories/GHSA-f2wf-25xc-69c9">https://github.com/guzzle/guzzle/security/advisories/GHSA-f2wf-25xc-69c9</a></p>
<p>Release Date: 2022-06-10</p>
<p>Fix Resolution: 6.5.7,7.4.4</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2021-41106</summary>
### Vulnerable Library - <b>lcobucci/jwt-3.3.3</b></p>
<p>A simple library to work with JSON Web Token and JSON Web Signature</p>
<p>Library home page: <a href="https://api.github.com/repos/lcobucci/jwt/zipball/c1123697f6a2ec29162b82f170dd4a491f524773">https://api.github.com/repos/lcobucci/jwt/zipball/c1123697f6a2ec29162b82f170dd4a491f524773</a></p>
<p>
Dependency Hierarchy:
- vonage/client-2.4.0 (Root Library)
- vonage/client-core-v2.6.0
- :x: **lcobucci/jwt-3.3.3** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
JWT is a library to work with JSON Web Token and JSON Web Signature. Prior to versions 3.4.6, 4.0.4, and 4.1.5, users of HMAC-based algorithms (HS256, HS384, and HS512) combined with `Lcobucci\JWT\Signer\Key\LocalFileReference` as key are having their tokens issued/validated using the file path as hashing key - instead of the contents. The HMAC hashing functions take any string as input and, since users can issue and validate tokens, users are led to believe that everything works properly. Versions 3.4.6, 4.0.4, and 4.1.5 have been patched to always load the file contents, deprecate the `Lcobucci\JWT\Signer\Key\LocalFileReference`, and suggest `Lcobucci\JWT\Signer\Key\InMemory` as the alternative. As a workaround, use `Lcobucci\JWT\Signer\Key\InMemory` instead of `Lcobucci\JWT\Signer\Key\LocalFileReference` to create the instances of one's keys.
<p>Publish Date: 2021-09-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41106>CVE-2021-41106</a></p>
</p>
<p></p>
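The failure mode is easy to demonstrate outside the library: HMAC accepts any byte string as a key, so keying off the key file's *path* "works" consistently for both issuing and validating while never using the intended secret. A standalone Python illustration (not lcobucci/jwt code; the key material and message are made up):

```python
import hashlib
import hmac
import os
import tempfile

# Write the "real" secret to a file, mirroring LocalFileReference usage.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"the-actual-256-bit-secret-key-material")
    key_path = f.name

message = b"header.payload"  # stand-in for a JWT signing input

# Bug: use the file path as the HMAC key...
sig_from_path = hmac.new(key_path.encode(), message, hashlib.sha256).hexdigest()
# ...fix: use the file contents.
sig_from_contents = hmac.new(open(key_path, "rb").read(), message, hashlib.sha256).hexdigest()
os.unlink(key_path)

# Signing and verifying with the same mistaken key agree, so "everything
# works" -- but the signature never involved the intended secret:
print(sig_from_path == sig_from_contents)  # → False
```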
### CVSS 3 Score Details (<b>3.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41106">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41106</a></p>
<p>Release Date: 2021-09-28</p>
<p>Fix Resolution: lcobucci/jwt - 3.4.6,4.0.4,4.1.5</p>
</p>
<p></p>
</details>
20,207 | 6,827,658,068 | IssuesEvent | 2017-11-08 17:44:17 | zooniverse/Panoptes-Front-End | https://api.github.com/repos/zooniverse/Panoptes-Front-End | closed | Subject exists and show in the classification interface but can't be previewed under the subject set project builder page | bug project builder | I'm making a project for my summer student to mark boulders in Planet Four images (thanks for making a great toolset!) so it's a private project at the moment. I had trouble uploading images. Finally got it to work, but they're not showing up in the preview on the project builder, even though the images are showing up in the project classification interface.

same image showing in the interface

@chrissnyder since you were looking at this the last time I added you to the project - let me know if I should make it public for other people to take a look
55,186 | 14,262,327,163 | IssuesEvent | 2020-11-20 12:47:52 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Parser cannot look up column from tables by same name from different schemas | C: Parser E: All Editions P: Medium T: Defect | Given a database like this:
```sql
create table a.t (i int);
create table b.t (j int);
```
The following statement:
```sql
select i, j from a.t, b.t
```
Fails to parse with this error:
```
org.jooq.impl.ParserException: Unknown field identifier: [1:9] select i[*], j from a.t, b.t
at org.jooq.impl.ParserContext.exception(ParserImpl.java:12274)
at org.jooq.impl.ParserContext.unknownField(ParserImpl.java:12529)
at org.jooq.impl.ParserContext.scopeEnd(ParserImpl.java:12514)
at org.jooq.impl.ParserImpl.parseQuery(ParserImpl.java:1042)
at org.jooq.impl.ParserImpl.parseSelect(ParserImpl.java:735)
at org.jooq.impl.ParserImpl.parseSelect(ParserImpl.java:729)
at org.jooq.impl.ParserMetaTest.testUnqualifiedFieldFromTableList(ParserMetaTest.java:454)
```
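The statement itself is valid SQL, since unqualified `i` and `j` each resolve to exactly one table in the `FROM` list. For instance, SQLite (used here only as a stand-in database, with the report's two schemas created via `ATTACH`) accepts it:

```python
import sqlite3

# Reproduce the report's schema layout in SQLite, which supports
# multiple schemas via ATTACH.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH ':memory:' AS a")
conn.execute("ATTACH ':memory:' AS b")
conn.execute("CREATE TABLE a.t (i INT)")
conn.execute("CREATE TABLE b.t (j INT)")
conn.execute("INSERT INTO a.t VALUES (1)")
conn.execute("INSERT INTO b.t VALUES (2)")

# Unqualified i and j are unambiguous across the two same-named tables,
# so the lookup the jOOQ parser rejects is well-defined here.
rows = conn.execute("SELECT i, j FROM a.t, b.t").fetchall()
print(rows)  # → [(1, 2)]
```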
126,368 | 17,023,385,216 | IssuesEvent | 2021-07-03 01:45:18 | USACE/cumulus | https://api.github.com/repos/USACE/cumulus | closed | [New UI] Overview | design enhancement | ### New Features
- [x] Left Navigation menu (mobile friendly)
- [x] **Home**
- [x] Design is more mobile friendly, but subject to change
- [x] Latest Updates Section to keep users informed of new products/changes
- [x] **Products**
- [x] Product names are more descriptive and human friendly
- [x] Product Tags to visually indicate differences
- [x] Simple parameter filter
- [x] Refresh button on Product Details "Availability Heatmap" enables refreshing data and visualization without refreshing page
- [x] New "Suite" table in database enabled grouping/classifying products by product suite. Product details page subheading demonstrates the product suite.
- [x] **Downloads**
- [x] Download History now full page width
- [x] Products are now displayed for each download
- [x] Time window is now displayed for each download
- [x] Download modal - now includes proper datepickers with time (local to user) for Start and End
- [x] Download modal - Products dropdown will stay open for easier selection of multiple products
- [ ] **Admin**
- [x] Add/Edit/Delete Products Parameters
- [x] Add/Edit/Delete Products
- [x] Add/Edit/Delete Products Suites
- [x] Add/Edit/Delete Products Tag
- [x] Add/Edit/Delete Products Units
- [ ] Add/Edit/Delete Products Watersheds (not complete)
- [x] **Help**
- [x] major links for API Docs, CAVI Script setup, contact support
- [x] FAQs section
- [x] **Docs**
- [x] API Documentation page generated by redoc library using existing apidoc.yml
- [x] CAVI script docs (to be added back soon with slightly better visual display)
### Bug Fixes
- [x] Products
- [x] Last Record better indicated future datetimes (ex: "in 16 hours" instead of "16 hours ago")
- [x] Heatmap limited to 20 years back for display to prevent browser lock up. More fixes to come later.
- [x] Header breadcrumb navigation is limited, but now functional
### Issues to fix before going to Stable
- [x] Product Detail - Display tags in product metadata
- [x] Downloads - Remove height limit on table display
- [x] Download Modal - Check/fix date logic for enabling products for selection
- [x] Download Modal - Increase datetime field width
- [x] Docs/rts-script - Finish
- [x] Help/FAQs
- [x] New user not getting redirected to profile/create on login | 1.0 | non_priority | 0 |
95,186 | 10,868,310,410 | IssuesEvent | 2019-11-15 03:13:24 | IntelPython/sdc | https://api.github.com/repos/IntelPython/sdc | opened | [SDC] Update license.md | Documentation | Present [license](https://github.com/IntelPython/sdc/blob/master/LICENSE.md) file does not look right. Need to change it to the correct one | 1.0 | non_priority | 0
26,592 | 13,061,725,168 | IssuesEvent | 2020-07-30 14:15:51 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | Performance issues (high consumption of CPU and RAM) | type:performance | Stable osu! (the other one) runs quite well on my notebook (it is old, but still works):
- Intel Celeron Bay Trail N2806 @ 1.6 Ghz;
- 2 GB RAM;
- Intel HD graphics 4400;
- Windows 8.1 Single Language;
The next screenshot shows the hardware usage when osu! is idle (not playing or downloading beatmaps):

As you can see, CPU and RAM usage is very low, so osu! runs quite smoothly.
But when I launch osu!lazer, performance drops to the point that I can't play at all:

Note the much higher CPU and RAM usage.
In both situations I ran nothing but osu! or osu!lazer.
Is this because my old notebook can't handle osu!lazer, or because osu!lazer is still in development?
| True | non_priority | 0 |
226,875 | 18,045,932,799 | IssuesEvent | 2021-09-18 22:29:56 | logicmoo/logicmoo_workspace | https://api.github.com/repos/logicmoo/logicmoo_workspace | opened | logicmoo.pfc.test.sanity_base.NEG_01E JUnit | Test_9999 logicmoo.pfc.test.sanity_base unit_test NEG_01E | (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01e.pfc)
GH_MASTER_ISSUE_FINFO=
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ANEG_01E
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/neg_01e.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/NEG_01E/logicmoo_pfc_test_sanity_base_NEG_01E_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/68/testReport/logicmoo.pfc.test.sanity_base/NEG_01E/logicmoo_pfc_test_sanity_base_NEG_01E_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://github.com/logicmoo/logicmoo_workspace/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/neg_01e.pfc
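The reproduction command at the top of this report wraps the test in GNU `timeout`. As a minimal sketch of what those flags mean (with `sleep 30` standing in for `lmoo-clif neg_01e.pfc`, since `lmoo-clif` is a workspace-local wrapper script):

```shell
# Same GNU timeout flags as the JUnit command, with a stand-in child process.
# --foreground      : keep the child in timeout's own process group (TTY-friendly)
# --preserve-status : exit with the child's status instead of timeout's usual 124
# -s SIGKILL        : signal sent when the 2s limit expires
# -k 2s             : follow-up SIGKILL 2s later (redundant here, since -s is already SIGKILL)
timeout --foreground --preserve-status -s SIGKILL -k 2s 2s sleep 30
echo "exit=$?"   # a child killed by SIGKILL reports 128+9 = 137
```

With `--preserve-status`, a run killed at the limit exits 137 rather than timeout's generic 124, which lets the harness tell a hang apart from an ordinary non-zero test exit.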
```
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01e.pfc'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- use_module(library(statistics)).
%:- mpred_notrace_exec.
% reset runtime counter
%:- mpred_notrace_exec.
% reset runtime counter
:- statistics(runtime,_Secs).
:- cls.
%~ skipped(messy_on_output,cls)
~path(1,3).
~path(1,4).
path(1,2).
path(2,3).
path(3,4).
path(1,3).
path(1,4).
:- listing(path/2).
%~ skipped( listing( path/2))
:- break.
%~ skipped(blocks_on_input,break)
:- mpred_test(path(3, 4)).
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01e.pfc:27
%~ mpred_test("Test_0001_Line_0000__path_3",baseKB:path(3,4))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01e.pfc#L27
/*~
%~ mpred_test("Test_0001_Line_0000__path_3",baseKB:path(3,4))
passed=info(why_was_true(baseKB:path(3,4)))
Justifications for path(3,4):
[36m 1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01e.pfc#L19 ',19) [0m
name ='logicmoo.pfc.test.sanity_base.NEG_01E-Test_0001_Line_0000__path_3'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01E'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01e.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01E-Test_0001_Line_0000__path_3-junit.xml
~*/
:- mpred_test(path(2, 3)).
%~ mpred_test("Test_0002_Line_0000__path_2",baseKB:path(2,3))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01e.pfc#L28
/*~
%~ mpred_test("Test_0002_Line_0000__path_2",baseKB:path(2,3))
passed=info(why_was_true(baseKB:path(2,3)))
Justifications for path(2,3):
[36m 1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01e.pfc#L18 ',18) [0m
name ='logicmoo.pfc.test.sanity_base.NEG_01E-Test_0002_Line_0000__path_2'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01E'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01e.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01E-Test_0002_Line_0000__path_2-junit.xml
~*/
:- mpred_test(path(1, 2)).
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01e.pfc:29
%~ mpred_test("Test_0003_Line_0000__path_1",baseKB:path(1,2))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01e.pfc#L29
/*~
%~ mpred_test("Test_0003_Line_0000__path_1",baseKB:path(1,2))
passed=info(why_was_true(baseKB:path(1,2)))
Justifications for path(1,2):
[36m 1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01e.pfc#L17 ',17) [0m
name ='logicmoo.pfc.test.sanity_base.NEG_01E-Test_0003_Line_0000__path_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01E'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01e.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01E-Test_0003_Line_0000__path_1-junit.xml
~*/
:- mpred_test(~path(1,3)).
%~ mpred_test("Test_0004_Line_0000__path_1",baseKB: ~path(1,3))
/*~
%~ mpred_test("Test_0004_Line_0000__path_1",baseKB: ~path(1,3))
^ Call: (68) [baseKB] ~path(1, 3)
^ Unify: (68) [baseKB] ~ (baseKB:path(1, 3))
^ Call: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 3)), info(pfc_lib:neg_in_code0(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 3)))))
^ Unify: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 3)), info(pfc_lib:neg_in_code0(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 3)))))
Call: (76) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (76) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (76) [loop_check] prolog_frame_attribute(1320, parent_goal, loop_check_term_frame(_93842, info(pfc_lib:neg_in_code0(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, _93848, _93850))
^ Fail: (76) [loop_check] prolog_frame_attribute(1320, parent_goal, loop_check_term_frame(_93842, info(pfc_lib:neg_in_code0(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, _93848, _93850))
^ Redo: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 3)), info(pfc_lib:neg_in_code0(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 3)))))
Call: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 3))
Unify: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 3))
^ Call: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 3)), info(pfc_lib:neg_may_naf(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 3)))))
^ Unify: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 3)), info(pfc_lib:neg_may_naf(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 3)))))
Call: (83) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (83) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (83) [loop_check] prolog_frame_attribute(1459, parent_goal, loop_check_term_frame(_99566, info(pfc_lib:neg_may_naf(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, _99572, _99574))
^ Fail: (83) [loop_check] prolog_frame_attribute(1459, parent_goal, loop_check_term_frame(_99566, info(pfc_lib:neg_may_naf(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, _99572, _99574))
^ Redo: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 3)), info(pfc_lib:neg_may_naf(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 3)))))
Call: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 3))
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 3))
^ Call: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _102898)
^ Unify: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _102898)
^ Call: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _102898)
^ Fail: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _102898)
^ Fail: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _102898)
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 3))
^ Call: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 3))
^ Unify: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 3))
^ Call: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 3))
^ Unify: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 3))
^ Fail: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 3))
^ Redo: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 3))
^ Exit: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 3))
^ Call: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _111068), call(_111068)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Unify: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _111068), call(_111068)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _111068)
^ Fail: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _111068)
Call: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
Unify: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Call: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _114854))
^ Unify: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _114854))
^ Call: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _114854)
^ Fail: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _114854)
^ Fail: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _114854))
Fail: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Fail: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _111068), call(_111068)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _119274)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_119844))
^ Call: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _119844)
^ Fail: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _119844)
^ Redo: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _121846)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _122474)
^ Call: (88) [pfc_lib] mpred_core_database_term(path, 2, _123042)
^ Fail: (88) [pfc_lib] mpred_core_database_term(path, 2, _123042)
^ Fail: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _124416)
^ Call: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _125020), call(_125020)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Unify: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _125020), call(_125020)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _125020)
^ Fail: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _125020)
Call: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
Unify: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _128806))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _128806))
^ Call: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _128806)
^ Fail: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _128806)
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _128806))
Fail: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Fail: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _125020), call(_125020)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_133172, path, 2, prologHybrid), _133202), call(_133202)*->true;clause_b(baseKB:mpred_prop(_133172, path, 2, prologHybrid))))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_133172, path, 2, prologHybrid), _133202), call(_133202)*->true;clause_b(baseKB:mpred_prop(_133172, path, 2, prologHybrid))))
^ Call: (91) [hook_database] clause(mpred_prop(_133172, path, 2, prologHybrid), _133202)
^ Fail: (91) [hook_database] clause(mpred_prop(_133172, path, 2, prologHybrid), _133202)
Call: (91) [hook_database] clause_b(baseKB:mpred_prop(_133172, path, 2, prologHybrid))
Unify: (91) [hook_database] clause_b(baseKB:mpred_prop(_133172, path, 2, prologHybrid))
^ Call: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_133172, path, 2, prologHybrid), _136988))
^ Unify: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_133172, path, 2, prologHybrid), _136988))
^ Call: (93) [baseKB] clause(mpred_prop(_133172, path, 2, prologHybrid), _136988)
^ Fail: (93) [baseKB] clause(mpred_prop(_133172, path, 2, prologHybrid), _136988)
^ Fail: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_133172, path, 2, prologHybrid), _136988))
Fail: (91) [hook_database] clause_b(baseKB:mpred_prop(_133172, path, 2, prologHybrid))
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_133172, path, 2, prologHybrid), _133202), call(_133202)*->true;clause_b(baseKB:mpred_prop(_133172, path, 2, prologHybrid))))
^ Call: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _141408)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_141978))
^ Call: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _141978)
^ Fail: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _141978)
^ Redo: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _143980)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _144608)
^ Call: (89) [pfc_lib] mpred_core_database_term(path, 2, _145176)
^ Fail: (89) [pfc_lib] mpred_core_database_term(path, 2, _145176)
^ Fail: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _146550)
Call: (98) [$autoload] leave_sandbox(_147148)
Unify: (98) [$autoload] leave_sandbox(_147148)
Exit: (98) [$autoload] leave_sandbox(false)
Call: (97) [$autoload] restore_sandbox(false)
Unify: (97) [$autoload] restore_sandbox(false)
Exit: (97) [$autoload] restore_sandbox(false)
Fail: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 3))
^ Fail: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 3)), info(pfc_lib:neg_may_naf(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 3)))))
Fail: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 3))
^ Fail: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 3)), info(pfc_lib:neg_in_code0(baseKB:path(1, 3)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 3)))))
^ Fail: (68) [baseKB] ~ (baseKB:path(1, 3))
^ Call: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
failure=info((why_was_true(baseKB:(\+ ~path(1,3))),rtrace(baseKB: ~path(1,3))))
no_proof_for(\+ ~path(1,3)).
no_proof_for(\+ ~path(1,3)).
no_proof_for(\+ ~path(1,3)).
name ='logicmoo.pfc.test.sanity_base.NEG_01E-Test_0004_Line_0000__path_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01E'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01e.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01E-Test_0004_Line_0000__path_1-junit.xml
~*/
:- mpred_test(~path(1,4)).
%:- mpred_test(\+path(1,4)).
%:- mpred_test(\+path(1,3)).
%~ mpred_test("Test_0005_Line_0000__path_1",baseKB: ~path(1,4))
/*~
%~ mpred_test("Test_0005_Line_0000__path_1",baseKB: ~path(1,4))
^ Call: (68) [baseKB] ~path(1, 4)
^ Unify: (68) [baseKB] ~ (baseKB:path(1, 4))
^ Call: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 4)), info(pfc_lib:neg_in_code0(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 4)))))
^ Unify: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 4)), info(pfc_lib:neg_in_code0(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 4)))))
Call: (76) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (76) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (76) [loop_check] prolog_frame_attribute(1189, parent_goal, loop_check_term_frame(_261436, info(pfc_lib:neg_in_code0(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, _261442, _261444))
^ Fail: (76) [loop_check] prolog_frame_attribute(1189, parent_goal, loop_check_term_frame(_261436, info(pfc_lib:neg_in_code0(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, _261442, _261444))
^ Redo: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 4)), info(pfc_lib:neg_in_code0(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 4)))))
Call: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 4))
Unify: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 4))
^ Call: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 4)), info(pfc_lib:neg_may_naf(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 4)))))
^ Unify: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 4)), info(pfc_lib:neg_may_naf(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 4)))))
Call: (83) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (83) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (83) [loop_check] prolog_frame_attribute(1328, parent_goal, loop_check_term_frame(_267160, info(pfc_lib:neg_may_naf(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, _267166, _267168))
^ Fail: (83) [loop_check] prolog_frame_attribute(1328, parent_goal, loop_check_term_frame(_267160, info(pfc_lib:neg_may_naf(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, _267166, _267168))
^ Redo: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 4)), info(pfc_lib:neg_may_naf(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 4)))))
Call: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 4))
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 4))
^ Call: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _270492)
^ Unify: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _270492)
^ Call: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _270492)
^ Fail: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _270492)
^ Fail: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _270492)
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 4))
^ Call: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 4))
^ Unify: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 4))
^ Call: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 4))
^ Unify: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 4))
^ Fail: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 4))
^ Redo: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 4))
^ Exit: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 4))
^ Call: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _278662), call(_278662)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Unify: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _278662), call(_278662)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _278662)
^ Fail: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _278662)
Call: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
Unify: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Call: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _282448))
^ Unify: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _282448))
^ Call: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _282448)
^ Fail: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _282448)
^ Fail: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _282448))
Fail: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Fail: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _278662), call(_278662)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _286868)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_287438))
^ Call: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _287438)
^ Fail: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _287438)
^ Redo: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _289440)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _290068)
^ Call: (88) [pfc_lib] mpred_core_database_term(path, 2, _290636)
^ Fail: (88) [pfc_lib] mpred_core_database_term(path, 2, _290636)
^ Fail: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _292010)
^ Call: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _292614), call(_292614)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Unify: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _292614), call(_292614)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _292614)
^ Fail: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _292614)
Call: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
Unify: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _296400))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _296400))
^ Call: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _296400)
^ Fail: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _296400)
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _296400))
Fail: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Fail: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _292614), call(_292614)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_300766, path, 2, prologHybrid), _300796), call(_300796)*->true;clause_b(baseKB:mpred_prop(_300766, path, 2, prologHybrid))))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_300766, path, 2, prologHybrid), _300796), call(_300796)*->true;clause_b(baseKB:mpred_prop(_300766, path, 2, prologHybrid))))
^ Call: (91) [hook_database] clause(mpred_prop(_300766, path, 2, prologHybrid), _300796)
^ Fail: (91) [hook_database] clause(mpred_prop(_300766, path, 2, prologHybrid), _300796)
Call: (91) [hook_database] clause_b(baseKB:mpred_prop(_300766, path, 2, prologHybrid))
Unify: (91) [hook_database] clause_b(baseKB:mpred_prop(_300766, path, 2, prologHybrid))
^ Call: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_300766, path, 2, prologHybrid), _304582))
^ Unify: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_300766, path, 2, prologHybrid), _304582))
^ Call: (93) [baseKB] clause(mpred_prop(_300766, path, 2, prologHybrid), _304582)
^ Fail: (93) [baseKB] clause(mpred_prop(_300766, path, 2, prologHybrid), _304582)
^ Fail: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_300766, path, 2, prologHybrid), _304582))
Fail: (91) [hook_database] clause_b(baseKB:mpred_prop(_300766, path, 2, prologHybrid))
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_300766, path, 2, prologHybrid), _300796), call(_300796)*->true;clause_b(baseKB:mpred_prop(_300766, path, 2, prologHybrid))))
^ Call: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _309002)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_309572))
^ Call: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _309572)
^ Fail: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _309572)
^ Redo: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _311574)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _312202)
^ Call: (89) [pfc_lib] mpred_core_database_term(path, 2, _312770)
^ Fail: (89) [pfc_lib] mpred_core_database_term(path, 2, _312770)
^ Fail: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _314144)
Call: (98) [$autoload] leave_sandbox(_314742)
Unify: (98) [$autoload] leave_sandbox(_314742)
Exit: (98) [$autoload] leave_sandbox(false)
Call: (97) [$autoload] restore_sandbox(false)
Unify: (97) [$autoload] restore_sandbox(false)
Exit: (97) [$autoload] restore_sandbox(false)
Fail: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 4))
^ Fail: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 4)), info(pfc_lib:neg_may_naf(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 4)))))
Fail: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 4))
^ Fail: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 4)), info(pfc_lib:neg_in_code0(baseKB:path(1, 4)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 4)))))
^ Fail: (68) [baseKB] ~ (baseKB:path(1, 4))
^ Call: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
failure=info((why_was_true(baseKB:(\+ ~path(1,4))),rtrace(baseKB: ~path(1,4))))
no_proof_for(\+ ~path(1,4)).
no_proof_for(\+ ~path(1,4)).
no_proof_for(\+ ~path(1,4)).
name ='logicmoo.pfc.test.sanity_base.NEG_01E-Test_0005_Line_0000__path_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01E'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01e.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01E-Test_0005_Line_0000__path_1-junit.xml
~*/
%~ unused(save_junit_results)
%~ test_completed_exit(8)
:- dynamic junit_prop/3.
:- dynamic junit_prop/3.
:- dynamic junit_prop/3.
```
totalTime=0
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ANEG_01E
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/neg_01e.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/NEG_01E/logicmoo_pfc_test_sanity_base_NEG_01E_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/68/testReport/logicmoo.pfc.test.sanity_base/NEG_01E/logicmoo_pfc_test_sanity_base_NEG_01E_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://github.com/logicmoo/logicmoo_workspace/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/neg_01e.pfc
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k neg_01e.pfc (returned 8)
| non_priority | 0
29,020 | 13,923,694,393 | IssuesEvent | 2020-10-21 14:42:47 | ManageIQ/manageiq-api | https://api.github.com/repos/ManageIQ/manageiq-api | closed | API Performance | performance | Issues related to improving the performance of the ManageIQ REST API
- [x] [Slow but small response for virtual columns seemingly due to N+1 queries](https://github.com/ManageIQ/manageiq-api/issues/869)
- [x] _[renderer.rb] Don't re-Rbac: part deux:_ #874
- [x] _AR VirtualAttribute queries + introspection:_ ~~#877~~ https://github.com/ManageIQ/manageiq-api/pull/887 https://github.com/ManageIQ/manageiq-api/pull/890
- [x] Improve user login timings
- [x] ~~_Use~~ ~~`.update_attribute`~~ ~~over~~ ~~`.save!`~~ ~~for User login:_ ManageIQ/manageiq#20471~~
- [x] Reduce queries for `User#save` validations: https://github.com/ManageIQ/manageiq/pull/20590
- [x] [Add support for returning nil/null in associations](https://github.com/ManageIQ/manageiq-api/issues/870)
- [x] [Querying for `&attributes=hardware.cpu_sockets` is 0.5 seconds faster than `num_cpu`, WAT](https://github.com/ManageIQ/manageiq-api/issues/872)
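For context on the first item: an "N+1 queries" pattern means one query to fetch a list plus one additional query per element, versus a single batched fetch. The shape of the problem can be sketched with a hypothetical query-counting data-access object — this is illustrative only, not ManageIQ or Rails code, and all names in it are invented:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class NPlusOneDemo {
    /** Hypothetical DAO that counts how many "queries" it issues. */
    static class Dao {
        int queries = 0;
        final Map<Integer, String> hardware = Map.of(1, "4 sockets", 2, "2 sockets", 3, "8 sockets");

        List<Integer> loadVmIds() { queries++; return List.of(1, 2, 3); }          // 1 query
        String loadHardwareFor(int vmId) { queries++; return hardware.get(vmId); } // 1 query per VM
        Map<Integer, String> loadAllHardware(List<Integer> ids) {                  // 1 batched query
            queries++;
            return ids.stream().collect(Collectors.toMap(id -> id, hardware::get));
        }
    }

    public static void main(String[] args) {
        // N+1 pattern: one list query, then one lookup per row.
        Dao naive = new Dao();
        for (int id : naive.loadVmIds()) naive.loadHardwareFor(id);
        System.out.println("naive queries: " + naive.queries);      // 1 + 3 = 4

        // Batched/eager pattern: two queries total, regardless of row count.
        Dao batched = new Dao();
        batched.loadAllHardware(batched.loadVmIds());
        System.out.println("batched queries: " + batched.queries);  // 2
    }
}
```

Eager-loading fixes (e.g. Rails `includes`) collapse the per-row lookups into the second, batched shape.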
| True | non_priority | 0
157,727 | 13,721,774,179 | IssuesEvent | 2020-10-03 00:15:05 | SJSU-Robotic/ros-curriculum | https://api.github.com/repos/SJSU-Robotic/ros-curriculum | opened | RE: lecture1.pdf, `roscore` behavior requires further elaboration | documentation | ## Problem Description
Regarding [lecture1.pdf](https://github.com/SJSU-Robotic/ros-curriculum/blob/master/readings/lecture1.pdf)
On slide 10, after running `roscore` in `terminal`, end users are presented with various links:

Users unfamiliar with `roscore`'s architecture are likely to assume that these links need to be opened and interacted with via a web browser in order to proceed.
> After you put that command your presented with a link. I thought you have to open it proceed further but I think we have to open a new terminal and progress on ward.
Thanks @anaik23 for reporting this
## Suggestion
Insert a slide between slides 10 and 11 to warn users against opening the resulting URLs via a web browser | 1.0 | non_priority | 0
227,118 | 17,374,574,596 | IssuesEvent | 2021-07-30 18:49:53 | arnaudmillergoupil/veganrecipe | https://api.github.com/repos/arnaudmillergoupil/veganrecipe | opened | Web Design | documentation enhancement | - [ ] Overall site tree (arborescence)
- [ ] UI: what the site will look like, its main characteristics, and its
- [ ] Template for a recipe | 1.0 | non_priority | 0
189,751 | 14,521,181,217 | IssuesEvent | 2020-12-14 06:55:52 | HumanBrainProject/interactive-viewer | https://api.github.com/repos/HumanBrainProject/interactive-viewer | closed | [Bug] After change connectivity source, if new source does not have data it breaks | bug needs test v2.3.0 | Steps to reproduce:
1. Load Cytoarchitectonic maps - v1.18
2. Select region "Area STS2 (STS) - right hemisphere"
3. Expand the connectivity
4. change connectivity from "1000BRAINS study" to "Averaged_FC_JuBrain_184Regions"
Expected Behavior
Instead of a connectivity diagram, it should return a message that connectivity is not available for the current region
actual Bihevior
It keeps "1000BRAINS study" diagram on the screen
| 1.0 | [Bug] After change connectivity source, if new source does not have data it breaks - Steps to reproduce:
1. Load Cytoarchitectonic maps - v1.18
2. Select region "Area STS2 (STS) - right hemisphere"
3. Expand the connectivity
4. change connectivity from "1000BRAINS study" to "Averaged_FC_JuBrain_184Regions"
Expected Behavior
Instead of a connectivity diagram, it should return a message that connectivity is not available for the current region
actual Bihevior
It keeps "1000BRAINS study" diagram on the screen
| non_priority | after change connectivity source if new source does not have data it breaks steps to reproduce load cytoarchitectonic maps select region area sts right hemisphere expand the connectivity change connectivity from study to averaged fc jubrain expected behavior instead of a connectivity diagram it should return a message that connectivity is not available for the current region actual bihevior it keeps study diagram on the screen | 0 |
21,332 | 4,704,099,070 | IssuesEvent | 2016-10-13 10:13:42 | Sylius/Sylius | https://api.github.com/repos/Sylius/Sylius | closed | [Docs] Models are not in bundle, but in component | Bug Documentation | For example, in this file `docs/bundles/SyliusPromotionBundle/models.rst` we have: All the models of this bundle are defined in Sylius\Bundle\PromotionBundle\Model.
But in PromotionBundle there is no Model folder...

While in component:

Is it me or the docs should be updated? | 1.0 | [Docs] Models are not in bundle, but in component - For example, in this file `docs/bundles/SyliusPromotionBundle/models.rst` we have: All the models of this bundle are defined in Sylius\Bundle\PromotionBundle\Model.
But in PromotionBundle there is no Model folder...

While in component:

Is it me or the docs should be updated? | non_priority | models are not in bundle but in component for example in this file docs bundles syliuspromotionbundle models rst we have all the models of this bundle are defined in sylius bundle promotionbundle model but in promotionbundle there is no model folder while in component is it me or the docs should be updated | 0 |
133,359 | 10,819,280,929 | IssuesEvent | 2019-11-08 14:04:28 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | flake: TestWebhookAdmission | kind/failing-test kind/flake sig/api-machinery | <!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs -->
**Which jobs are failing**:
pull-kubernetes-integration
**Which test(s) are failing**:
https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration/1172176898202013696
```
=== RUN TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete
I0912 16:14:28.130136 104492 client.go:361] parsed scheme: "endpoint"
I0912 16:14:28.130461 104492 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}]
--- FAIL: TestWebhookAdmissionWithWatchCache/apiextensions.k8s.io.v1.customresourcedefinitions/delete (0.24s)
admission_test.go:676: waiting for schema.GroupVersionResource{Group:"apiextensions.k8s.io", Version:"v1", Resource:"customresourcedefinitions"} to be deleted (name: openshiftwebconsoleconfigs.webconsole2.operator.openshift.io, finalizers: [customresourcecleanup.apiextensions.k8s.io])...
admission_test.go:702: CustomResourceDefinition.apiextensions.k8s.io "openshiftwebconsoleconfigs.webconsole2.operator.openshift.io" is invalid: metadata.finalizers: Forbidden: no new finalizers can be added if the object is being deleted, found new finalizers []string{"test/k8s.io"}
```
**Since when has it been failing**:
**Testgrid link**:
https://testgrid.k8s.io/presubmits-kubernetes-blocking#pull-kubernetes-integration&sort-by-flakiness=&include-filter-by-regex=TestWebhookAdmission&width=5
**Reason for failure**:
**Anything else we need to know**:
The flake is currently rare. It is not yet clear whether this is a test issue, a timing issue, or a bug.
/sig api-machinery
/assign | 1.0 | non_priority | 0
180,969 | 21,629,319,936 | IssuesEvent | 2022-05-05 08:02:45 | druidfi/security-checker-action | https://api.github.com/repos/druidfi/security-checker-action | opened | Pending security updates in production! | security | ## Security updates available
- `drupal/core` from 9.3.9 to [9.3.12](https://www.drupal.org/project/drupal/releases/9.3.12)
- `drupal/ctools` from 3.4.0 to [3.7.0](https://www.drupal.org/project/ctools/releases/8.x-3.7)
These updates are pending and were found with scanning `composer.lock` and checking for available security updates.
Branch: `refs/heads/main`
| True | non_priority | 0 |
135,130 | 18,667,101,553 | IssuesEvent | 2021-10-30 02:14:59 | eclipse/rdf4j | https://api.github.com/repos/eclipse/rdf4j | closed | XML-based parsers should not load external DTDs by default | 🐞 bug 📦 rio security | ### Problem description
We recently received an improvement request for Any23 to [optionally disable remote HTTP connections when resolving XML entities](https://issues.apache.org/jira/browse/ANY23-504). Any23 utilizes rdf4j 3.1.2. The stack trace provided by the reporter indicates that `org.eclipse.rdf4j.rio.trix.TriXParser` parsing can lead to a hung thread (for about two minutes) with an open HTTP connection.
I am writing here to see if this is something we can configure in RDF4J or whether we need to go deeper into Xerces or even the SUN HttpClient. I am looking for some guidance.
### Preferred solution
The [test file](https://github.com/apache/nutch/blob/master/src/plugin/any23/sample/BBC_News_Scotland.html) is available for anyone interested in trying to reproduce this issue. I am looking for some guidance on where this configuration would actually be implemented. Thanks for any suggestions.
### Are you interested in contributing a solution yourself?
Yes
### Alternatives you've considered
Nothing yet. Apart from studying the `org.eclipse.rdf4j.rio.trix.TriXParser` source code this is the first port of call. Thanks for anyone who is interested in this issue.
### Anything else?
_No response_ | True | non_priority | 0 |
55,609 | 13,647,449,462 | IssuesEvent | 2020-09-26 03:22:45 | TerryCavanagh/diceydungeonsbeta | https://api.github.com/repos/TerryCavanagh/diceydungeonsbeta | closed | Suggestion/Ideas: Silence for Robot and Jester | v0.5: 21st June Build | for robot:
* only the calculate button is visible
or
* no jackpot
for jester:
* only the discard/snap button is visible (like multicard with less multi)
or
* can't discard/snap | 1.0 | non_priority | 0 |
306,229 | 23,150,457,214 | IssuesEvent | 2022-07-29 07:47:40 | damdalf/Personal-Projects | https://api.github.com/repos/damdalf/Personal-Projects | closed | Remove all 'TODO' comments from stock.py and make them Git issues | documentation | **Requirements**
All existing 'TODO' comments shall be converted into Git issues and removed from stock.py. | 1.0 | non_priority | 0 |
313,753 | 23,490,420,022 | IssuesEvent | 2022-08-17 18:08:51 | astropy/astroquery | https://api.github.com/repos/astropy/astroquery | opened | DOC: identify smaller datasets to plug into documentation examples | Documentation esa.esa_hubble esa | Some of the code examples in the ESA modules are being skipped for testing as they are pulling largish datasets. It would be wonderful to identify much smaller datasets to include the examples in the testing.
cc @jespinosaar | 1.0 | non_priority | 0 |
245,290 | 18,778,801,636 | IssuesEvent | 2021-11-08 02:03:37 | AY2122S1-CS2103T-T15-1/tp | https://api.github.com/repos/AY2122S1-CS2103T-T15-1/tp | closed | [DOC] Rama Update DG | type.Documentation | Each member should describe the implementation of at least one enhancement she/he has added (or planning to add).
Expected length: 1+ page per person | 1.0 | non_priority | 0 |
62,928 | 7,657,573,833 | IssuesEvent | 2018-05-10 20:07:48 | endangereddataweek/resources | https://api.github.com/repos/endangereddataweek/resources | opened | Revise existing workshop material to be more generalizable | brainstorming design enhancement mozsprint review | On a suggestion by @chadsansing, I'll work on assessing and redesigning my existing workshop slides to make them more flexible for adaptability. Any review of the slides for what's working vs. what's not would be helpful! | 1.0 | non_priority | 0 |
128,193 | 18,040,476,659 | IssuesEvent | 2021-09-18 01:19:55 | Reid-Turner/uppy | https://api.github.com/repos/Reid-Turner/uppy | opened | CVE-2021-3801 (Medium) detected in prismjs-1.22.0.tgz | security vulnerability | ## CVE-2021-3801 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>prismjs-1.22.0.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.22.0.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.22.0.tgz</a></p>
<p>
Dependency Hierarchy:
- uppy.io-file:website.tgz (Root Library)
- :x: **prismjs-1.22.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
prism is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3801>CVE-2021-3801</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"prismjs","packageVersion":"1.22.0","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"uppy.io:file:website;prismjs:1.22.0","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3801","vulnerabilityDetails":"prism is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3801","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-3801 (Medium) detected in prismjs-1.22.0.tgz - ## CVE-2021-3801 - Medium Severity Vulnerability
| non_priority | 0 |
16,704 | 4,076,807,568 | IssuesEvent | 2016-05-30 03:08:23 | twinlabs/forum | https://api.github.com/repos/twinlabs/forum | opened | Add documentation for brand new contributors. | documentation | Currently, when cloning onto a new machine I don't have explicit instructions for how to get up and running. A short guide that describes what to put in place would do wonders. | 1.0 | non_priority | 0 |
87,780 | 15,790,319,031 | IssuesEvent | 2021-04-02 01:07:32 | kadirselcuk/zaproxy | https://api.github.com/repos/kadirselcuk/zaproxy | opened | CVE-2020-36181 (High) detected in jackson-databind-2.9.2.jar | security vulnerability | ## CVE-2020-36181 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: zaproxy/buildSrc/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.2/1d8d8cb7cf26920ba57fb61fa56da88cc123b21f/jackson-databind-2.9.2.jar</p>
<p>
Dependency Hierarchy:
- github-api-1.95.jar (Root Library)
- :x: **jackson-databind-2.9.2.jar** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-36181 (High) detected in jackson-databind-2.9.2.jar - ## CVE-2020-36181 - High Severity Vulnerability
| non_priority | 0 |
49,524 | 7,521,200,873 | IssuesEvent | 2018-04-12 16:26:44 | qorelanguage/qore | https://api.github.com/repos/qorelanguage/qore | opened | type error with complex hash assignments | bug c++ documentation types | ```
david@greybeard:~/src/qore/git/qore/src$ qore -ne 'hash<string, hash<auto>> h.a = {"a": "str", "b": 1};'
unhandled QORE System exception thrown in TID 1 at 2018-04-12 18:26:08.223928 Thu +02:00 (CEST) at <command-line>:1
PARSE-TYPE-ERROR: lvalue for assignment operator '=' expects hash<string, any> or no value (NOTHING), but right-hand side is type 'hash'
``` | 1.0 | non_priority | 0 |
61,143 | 25,377,152,428 | IssuesEvent | 2022-11-21 14:54:36 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | [Enhancement]: aws_amplify_app add support for platform = "WEB_COMPUTE" | enhancement service/amplify | ### Description
# Context
I am building a React front end with NextJS for server side rendering.
I use AWS Amplify for hosting and provision the resources using terraform cloud.
As of November 2022, AWS supports NextJS version 12 and 13 via and new Amplify Platform Type called "WEB_COMPUTE"
The terraform provider does not support this value and will error when a plan is run.
### ERROR
```
{"@level":"info","@message":"Terraform 1.3.5","@module":"terraform.ui","@timestamp":"2022-11-20T07:40:43.158542Z","terraform":"1.3.5","type":"version","ui":"1.0"}
{"@level":"error","@message":"Error: expected platform to be one of [WEB WEB_DYNAMIC], got WEB_COMPUTE","@module":"terraform.ui","@timestamp":"2022-11-20T07:40:46.855878Z","diagnostic":{"severity":"error","summary":"expected platform to be one of [WEB WEB_DYNAMIC], got WEB_COMPUTE","detail":"","address":"aws_amplify_app.frontend","range":{"filename":"amplify.tf","start":{"line":64,"column":35,"byte":1466},"end":{"line":64,"column":48,"byte":1479}},"snippet":{"context":"resource \"aws_amplify_app\" \"frontend\"","code":" platform = \"WEB_COMPUTE\"","start_line":64,"highlight_start_offset":34,"highlight_end_offset":47,"values":[]}},"type":"diagnostic"}
```
# Expectation
GIVEN I am configuring the aws_amplify_app resource
AND I have set the value of `platform` to the string `WEB_COMPUTE`
WHEN I start a new run
THEN the plan will succeed
## Note
I hardly trust myself to make changes to my own code; also, I have barely touched Go, so I don't think I am best placed to make the required changes.
### Affected Resource(s) and/or Data Source(s)
aws_amplify_app
### Potential Terraform Configuration
```terraform
resource "aws_amplify_app" "frontend" {
name = "${var.project_name}-${var.organization}"
repository = var.github_repository
access_token = var.github_token
build_spec = <<-EOT
version: 1
applications:
- frontend:
phases:
preBuild:
commands:
- npm ci
build:
commands:
- npm run build
artifacts:
baseDirectory: .next
files:
- '**/*'
cache:
paths:
- node_modules/**/*
appRoot: unit
EOT
environment_variables = {
AMPLIFY_MONOREPO_APP_ROOT = var.app_root,
AMPLIFY_DIFF_DEPLOY = "false",
_LIVE_UPDATES = <<-EOT
[{"pkg":"next-version","type":"internal","version":"latest"}]
EOT
}
enable_auto_branch_creation = false
enable_branch_auto_build = false
enable_branch_auto_deletion = false
platform = "WEB_COMPUTE"
iam_service_role_arn = aws_iam_role.amplify_role.arn
#Amplify will automatically add custom_rules after initial deployment. This ensures your subsequent terraform runs don't break your amplify deployment.
lifecycle {
ignore_changes = [
custom_rule,
]
}
custom_rule {
source = "/<*>"
status = "404-200"
target = "/index.html"
}
tags = {
Environment = var.organization
Project = var.project_name
}
}
```
### References
https://docs.aws.amazon.com/cli/latest/reference/amplify/create-app.html?highlight=amplify
https://docs.aws.amazon.com/amplify/latest/userguide/update-app-nextjs-version.html
### Would you like to implement a fix?
No | 1.0 | [Enhancement]: aws_amplify_app add support for platform = "WEB_COMPUTE" - ### Description
# Context
I am building a React front end with NextJS for server side rendering.
I use AWS Amplify for hosting and provision the resources using terraform cloud.
As of November 2022, AWS supports NextJS version 12 and 13 via and new Amplify Platform Type called "WEB_COMPUTE"
The terraform provider does not support this value and will error when a plan is run.
### ERROR
```
{"@level":"info","@message":"Terraform 1.3.5","@module":"terraform.ui","@timestamp":"2022-11-20T07:40:43.158542Z","terraform":"1.3.5","type":"version","ui":"1.0"}
{"@level":"error","@message":"Error: expected platform to be one of [WEB WEB_DYNAMIC], got WEB_COMPUTE","@module":"terraform.ui","@timestamp":"2022-11-20T07:40:46.855878Z","diagnostic":{"severity":"error","summary":"expected platform to be one of [WEB WEB_DYNAMIC], got WEB_COMPUTE","detail":"","address":"aws_amplify_app.frontend","range":{"filename":"amplify.tf","start":{"line":64,"column":35,"byte":1466},"end":{"line":64,"column":48,"byte":1479}},"snippet":{"context":"resource \"aws_amplify_app\" \"frontend\"","code":" platform = \"WEB_COMPUTE\"","start_line":64,"highlight_start_offset":34,"highlight_end_offset":47,"values":[]}},"type":"diagnostic"}
```
# Expectation
GIVEN I am configuring the aws_amplify_app resource
AND I have set the value of `platform` to the string `WEB_COMPUTE`
WHEN I start a new run
THEN the plan will succeed
## Note
I hardly trust myself to may changes to my own code, also I have barely touched GO lang, so I dont think that I am best to make the required changes.
### Affected Resource(s) and/or Data Source(s)
aws_amplify_app
### Potential Terraform Configuration
```terraform
resource "aws_amplify_app" "frontend" {
name = "${var.project_name}-${var.organization}"
repository = var.github_repository
access_token = var.github_token
build_spec = <<-EOT
version: 1
applications:
- frontend:
phases:
preBuild:
commands:
- npm ci
build:
commands:
- npm run build
artifacts:
baseDirectory: .next
files:
- '**/*'
cache:
paths:
- node_modules/**/*
appRoot: unit
EOT
environment_variables = {
AMPLIFY_MONOREPO_APP_ROOT = var.app_root,
AMPLIFY_DIFF_DEPLOY = "false",
_LIVE_UPDATES = <<-EOT
[{"pkg":"next-version","type":"internal","version":"latest"}]
EOT
}
enable_auto_branch_creation = false
enable_branch_auto_build = false
enable_branch_auto_deletion = false
platform = "WEB_COMPUTE"
iam_service_role_arn = aws_iam_role.amplify_role.arn
#Amplify will automatically add custom_rules after initial deployment. This ensures your subsequent terraform runs don't break your amplify deployment.
lifecycle {
ignore_changes = [
custom_rule,
]
}
custom_rule {
source = "/<*>"
status = "404-200"
target = "/index.html"
}
tags = {
Environment = var.organization
Project = var.project_name
}
}
```
### References
https://docs.aws.amazon.com/cli/latest/reference/amplify/create-app.html?highlight=amplify
https://docs.aws.amazon.com/amplify/latest/userguide/update-app-nextjs-version.html
### Would you like to implement a fix?
No | non_priority | aws amplify app add support for platform web compute description context i am building a react front end with nextjs for server side rendering i use aws amplify for hosting and provision the resources using terraform cloud as of november aws supports nextjs version and via and new amplify platform type called web compute the terraform provider does not support this value and will error when a plan is run error level info message terraform module terraform ui timestamp terraform type version ui level error message error expected platform to be one of got web compute module terraform ui timestamp diagnostic severity error summary expected platform to be one of got web compute detail address aws amplify app frontend range filename amplify tf start line column byte end line column byte snippet context resource aws amplify app frontend code platform web compute start line highlight start offset highlight end offset values type diagnostic expectation given i am configuring the aws amplify app resource and i have set the value of platform to the string web compute when i start a new run then the plan will succeed note i hardly trust myself to may changes to my own code also i have barely touched go lang so i dont think that i am best to make the required changes affected resource s and or data source s aws amplify app potential terraform configuration terraform resource aws amplify app frontend name var project name var organization repository var github repository access token var github token build spec eot version applications frontend phases prebuild commands npm ci build commands npm run build artifacts basedirectory next files cache paths node modules approot unit eot environment variables amplify monorepo app root var app root amplify diff deploy false live updates eot eot enable auto branch creation false enable branch auto build false enable branch auto deletion false platform web compute iam service role arn aws iam role amplify role arn 
amplify will automatically add custom rules after initial deployment this ensures your subsequent terraform runs don t break your amplify deployment lifecycle ignore changes custom rule custom rule source status target index html tags environment var organization project var project name references would you like to implement a fix no | 0 |
26,294 | 6,759,804,581 | IssuesEvent | 2017-10-24 18:22:22 | OpenGenus/cosmos | https://api.github.com/repos/OpenGenus/cosmos | closed | Add JS implementation for bloom filter | add code hacktoberfest | <!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!--Required Information-->
**This is a(n):**
<!-- choose one by changing [ ] to [x] -->
- [x] New algorithm
- [ ] Update to an existing algorithm
- [ ] Error
- [ ] Proposal to the Repository
**Details:**
<!-- Details of algorithm to be added/updated -->
Adding a JS implementation of the bloom filter. The initial implementation will have the following hash functions:
* FNV
* FNV-1a
`-Sid`
| 1.0 | Add JS implementation for bloom filter - <!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!--Required Information-->
**This is a(n):**
<!-- choose one by changing [ ] to [x] -->
- [x] New algorithm
- [ ] Update to an existing algorithm
- [ ] Error
- [ ] Proposal to the Repository
**Details:**
<!-- Details of algorithm to be added/updated -->
Adding a JS implementation of the bloom filter. The initial implementation will have the following hash functions:
* FNV
* FNV-1a
`-Sid`
| non_priority | add js implementation for bloom filter this is a n new algorithm update to an existing algorithm error proposal to the repository details adding a js implementation of the bloom filter the initial implementation will have the following hash functions fnv fnv sid | 0 |
80,548 | 23,241,859,798 | IssuesEvent | 2022-08-03 16:15:22 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | Build 5 for 8.4 with status FAILURE | automation ci-reported Team:Elastic-Agent-Data-Plane build-failures |
## :broken_heart: Tests Failed
<!-- BUILD BADGES-->
> _the below badges are clickable and redirect to their specific view in the CI or DOCS_
[](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//pipeline) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//tests) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//changes) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//artifacts) [](http://beats_null.docs-preview.app.elstc.co/diff) [](https://ci-stats.elastic.co/app/apm/services/beats-ci/transactions/view?rangeFrom=2022-08-03T13:53:28.541Z&rangeTo=2022-08-03T14:13:28.541Z&transactionName=Beats/beats/8.4&transactionType=job&latencyAggregationType=avg&traceId=e2a8217489c8f6fddd0ae9f93455fd6d&transactionId=247bc374cb614958)
<!-- BUILD SUMMARY-->
<details><summary>Expand to view the summary</summary>
<p>
#### Build stats
* Start Time: 2022-08-03T14:03:28.541+0000
* Duration: 100 min 52 sec
#### Test stats :test_tube:
| Test | Results |
| ------------ | :-----------------------------: |
| Failed | 10 |
| Passed | 24295 |
| Skipped | 2254 |
| Total | 26559 |
</p>
</details>
<!-- TEST RESULTS IF ANY-->
### Test errors [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//tests)
<details><summary>Expand to view the tests failures</summary><p>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_dashboards – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_export_config – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_export_ilm_policy – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_export_index_pattern – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_export_index_pattern_migration – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_export_template – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_index_management – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_start_stop – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_health – x-pack.metricbeat.module.enterprisesearch.test_enterprisesearch.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "DeprecationWarning: The "warn" method is deprecated, use "warning" instead"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
response = <Response [500]>
def _raise_for_status(self, response):
"""Raises stored :class:`APIError`, if one occurred."""
try:
> response.raise_for_status()
../../build/ve/docker/lib/python3.9/site-packages/docker/api/client.py:268:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Response [500]>
def raise_for_status(self):
"""Raises :class:`HTTPError`, if one occurred."""
http_error_msg = ""
if isinstance(self.reason, bytes):
# We attempt to decode utf-8 first because some servers
# choose to localize their reason strings. If the string
# isn"t utf-8, we fall back to iso-8859-1 for all other
# encodings. (See PR #3538)
try:
reason = self.reason.decode("utf-8")
except UnicodeDecodeError:
reason = self.reason.decode("iso-8859-1")
else:
reason = self.reason
if 400 <= self.status_code < 500:
http_error_msg = u"%s Client Error: %s for url: %s" % (self.status_code, reason, self.url)
elif 500 <= self.status_code < 600:
http_error_msg = u"%s Server Error: %s for url: %s" % (self.status_code, reason, self.url)
if http_error_msg:
> raise HTTPError(http_error_msg, response=self)
E requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start
../../build/ve/docker/lib/python3.9/site-packages/requests/models.py:943: HTTPError
During handling of the above exception, another exception occurred:
self = <Service: elasticsearch>
container = <Container: enterprisesearch_7918a246f6f8_elasticsearch_1 (5e716b)>
use_network_aliases = True
def start_container(self, container, use_network_aliases=True):
self.connect_container_to_networks(container, use_network_aliases)
try:
> container.start()
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:643:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Container: enterprisesearch_7918a246f6f8_elasticsearch_1 (5e716b)>
options = {}
def start(self, **options):
> return self.client.start(self.id, **options)
../../build/ve/docker/lib/python3.9/site-packages/compose/container.py:228:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
resource_id = "5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016"
args = (), kwargs = {}
@functools.wraps(f)
def wrapped(self, resource_id=None, *args, **kwargs):
if resource_id is None and kwargs.get(resource_name):
resource_id = kwargs.pop(resource_name)
if isinstance(resource_id, dict):
resource_id = resource_id.get("Id", resource_id.get("ID"))
if not resource_id:
raise errors.NullResource(
"Resource ID was not provided"
)
> return f(self, resource_id, *args, **kwargs)
../../build/ve/docker/lib/python3.9/site-packages/docker/utils/decorators.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
container = "5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016"
args = (), kwargs = {}
url = "http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start"
res = <Response [500]>
@utils.check_resource("container")
def start(self, container, *args, **kwargs):
"""
Start a container. Similar to the ``docker start`` command, but
doesn"t support attach options.
**Deprecation warning:** Passing configuration options in ``start`` is
no longer supported. Users are expected to provide host config options
in the ``host_config`` parameter of
:py:meth:`~ContainerApiMixin.create_container`.
Args:
container (str): The container to start
Raises:
:py:class:`docker.errors.APIError`
If the server returns an error.
:py:class:`docker.errors.DeprecatedMethod`
If any argument besides ``container`` are provided.
Example:
>>> container = client.api.create_container(
... image="busybox:latest",
... command="/bin/sleep 30")
>>> client.api.start(container=container.get("Id"))
"""
if args or kwargs:
raise errors.DeprecatedMethod(
"Providing configuration in the start() method is no longer "
"supported. Use the host_config param in create_container "
"instead."
)
url = self._url("/containers/{0}/start", container)
res = self._post(url)
> self._raise_for_status(res)
../../build/ve/docker/lib/python3.9/site-packages/docker/api/container.py:1109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
response = <Response [500]>
def _raise_for_status(self, response):
"""Raises stored :class:`APIError`, if one occurred."""
try:
response.raise_for_status()
except requests.exceptions.HTTPError as e:
> raise create_api_error_from_http_exception(e)
../../build/ve/docker/lib/python3.9/site-packages/docker/api/client.py:270:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
e = HTTPError("500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start")
def create_api_error_from_http_exception(e):
"""
Create a suitable APIError from requests.exceptions.HTTPError.
"""
response = e.response
try:
explanation = response.json()["message"]
except ValueError:
explanation = (response.content or "").strip()
cls = APIError
if response.status_code == 404:
if explanation and ("No such image" in str(explanation) or
"not found: does not exist or no pull access"
in str(explanation) or
"repository does not exist" in str(explanation)):
cls = ImageNotFound
else:
cls = NotFound
> raise cls(e, response=response, explanation=explanation)
E docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start: Internal Server Error ("driver failed programming external connectivity on endpoint enterprisesearch_7918a246f6f8_elasticsearch_1 (42ac5e9284c17e132de45d67619452694c44049cb959fa7dac6ddb0e7ae6b2e1): Bind for 0.0.0.0:9200 failed: port is already allocated")
../../build/ve/docker/lib/python3.9/site-packages/docker/errors.py:31: APIError
During handling of the above exception, another exception occurred:
self = <class "test_enterprisesearch.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
../../libbeat/tests/system/beat/compose.py:66: in compose_up
project.up(
../../build/ve/docker/lib/python3.9/site-packages/compose/project.py:697: in up
results, errors = parallel.parallel_execute(
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:108: in parallel_execute
raise error_to_reraise
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:206: in producer
result = func(obj)
../../build/ve/docker/lib/python3.9/site-packages/compose/project.py:679: in do
return service.execute_convergence_plan(
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:559: in execute_convergence_plan
return self._execute_convergence_create(
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:473: in _execute_convergence_create
containers, errors = parallel_execute(
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:108: in parallel_execute
raise error_to_reraise
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:206: in producer
result = func(obj)
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:478: in <lambda>
lambda service_name: create_and_start(self, service_name.number),
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:461: in create_and_start
self.start_container(container)
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:647: in start_container
log.warn("Host is already in use by another container")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Logger compose.service (WARNING)>
msg = "Host is already in use by another container", args = (), kwargs = {}
def warn(self, msg, *args, **kwargs):
> warnings.warn("The "warn" method is deprecated, "
"use "warning" instead", DeprecationWarning, 2)
E DeprecationWarning: The "warn" method is deprecated, use "warning" instead
/usr/lib/python3.9/logging/__init__.py:1457: DeprecationWarning
```
</p></details>
</ul>
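The `DeprecationWarning` above is raised by docker-compose's own `log.warn(...)` call, which the strict warning filters in this pytest run escalate to an error. A minimal sketch of the usual fix (not the actual compose patch) is to call the non-deprecated `Logger.warning` instead:

```python
import logging
import warnings

log = logging.getLogger("compose.service")

def warn_host_in_use(logger: logging.Logger) -> None:
    # Logger.warn is a deprecated alias that merely forwards to
    # Logger.warning; calling warning() directly emits no
    # DeprecationWarning, so strict warning filters stay quiet.
    logger.warning("Host is already in use by another container")

with warnings.catch_warnings():
    # Reproduce the strict filter this build applies.
    warnings.simplefilter("error", DeprecationWarning)
    warn_host_in_use(log)  # completes without raising
```

`warn_host_in_use` is a hypothetical helper name used only for this illustration; the real call sits inline in `compose/service.py:647`.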
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_stats – x-pack.metricbeat.module.enterprisesearch.test_enterprisesearch.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "DeprecationWarning: The "warn" method is deprecated, use "warning" instead"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
response = <Response [500]>
def _raise_for_status(self, response):
"""Raises stored :class:`APIError`, if one occurred."""
try:
> response.raise_for_status()
../../build/ve/docker/lib/python3.9/site-packages/docker/api/client.py:268:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Response [500]>
def raise_for_status(self):
"""Raises :class:`HTTPError`, if one occurred."""
http_error_msg = ""
if isinstance(self.reason, bytes):
# We attempt to decode utf-8 first because some servers
# choose to localize their reason strings. If the string
# isn"t utf-8, we fall back to iso-8859-1 for all other
# encodings. (See PR #3538)
try:
reason = self.reason.decode("utf-8")
except UnicodeDecodeError:
reason = self.reason.decode("iso-8859-1")
else:
reason = self.reason
if 400 <= self.status_code < 500:
http_error_msg = u"%s Client Error: %s for url: %s" % (self.status_code, reason, self.url)
elif 500 <= self.status_code < 600:
http_error_msg = u"%s Server Error: %s for url: %s" % (self.status_code, reason, self.url)
if http_error_msg:
> raise HTTPError(http_error_msg, response=self)
E requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start
../../build/ve/docker/lib/python3.9/site-packages/requests/models.py:943: HTTPError
During handling of the above exception, another exception occurred:
self = <Service: elasticsearch>
container = <Container: enterprisesearch_7918a246f6f8_elasticsearch_1 (5e716b)>
use_network_aliases = True
def start_container(self, container, use_network_aliases=True):
self.connect_container_to_networks(container, use_network_aliases)
try:
> container.start()
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:643:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Container: enterprisesearch_7918a246f6f8_elasticsearch_1 (5e716b)>
options = {}
def start(self, **options):
> return self.client.start(self.id, **options)
../../build/ve/docker/lib/python3.9/site-packages/compose/container.py:228:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
resource_id = "5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016"
args = (), kwargs = {}
@functools.wraps(f)
def wrapped(self, resource_id=None, *args, **kwargs):
if resource_id is None and kwargs.get(resource_name):
resource_id = kwargs.pop(resource_name)
if isinstance(resource_id, dict):
resource_id = resource_id.get("Id", resource_id.get("ID"))
if not resource_id:
raise errors.NullResource(
"Resource ID was not provided"
)
> return f(self, resource_id, *args, **kwargs)
../../build/ve/docker/lib/python3.9/site-packages/docker/utils/decorators.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
container = "5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016"
args = (), kwargs = {}
url = "http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start"
res = <Response [500]>
@utils.check_resource("container")
def start(self, container, *args, **kwargs):
"""
Start a container. Similar to the ``docker start`` command, but
doesn"t support attach options.
**Deprecation warning:** Passing configuration options in ``start`` is
no longer supported. Users are expected to provide host config options
in the ``host_config`` parameter of
:py:meth:`~ContainerApiMixin.create_container`.
Args:
container (str): The container to start
Raises:
:py:class:`docker.errors.APIError`
If the server returns an error.
:py:class:`docker.errors.DeprecatedMethod`
If any argument besides ``container`` are provided.
Example:
>>> container = client.api.create_container(
... image="busybox:latest",
... command="/bin/sleep 30")
>>> client.api.start(container=container.get("Id"))
"""
if args or kwargs:
raise errors.DeprecatedMethod(
"Providing configuration in the start() method is no longer "
"supported. Use the host_config param in create_container "
"instead."
)
url = self._url("/containers/{0}/start", container)
res = self._post(url)
> self._raise_for_status(res)
../../build/ve/docker/lib/python3.9/site-packages/docker/api/container.py:1109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
response = <Response [500]>
def _raise_for_status(self, response):
"""Raises stored :class:`APIError`, if one occurred."""
try:
response.raise_for_status()
except requests.exceptions.HTTPError as e:
> raise create_api_error_from_http_exception(e)
../../build/ve/docker/lib/python3.9/site-packages/docker/api/client.py:270:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
e = HTTPError("500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start")
def create_api_error_from_http_exception(e):
"""
Create a suitable APIError from requests.exceptions.HTTPError.
"""
response = e.response
try:
explanation = response.json()["message"]
except ValueError:
explanation = (response.content or "").strip()
cls = APIError
if response.status_code == 404:
if explanation and ("No such image" in str(explanation) or
"not found: does not exist or no pull access"
in str(explanation) or
"repository does not exist" in str(explanation)):
cls = ImageNotFound
else:
cls = NotFound
> raise cls(e, response=response, explanation=explanation)
E docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start: Internal Server Error ("driver failed programming external connectivity on endpoint enterprisesearch_7918a246f6f8_elasticsearch_1 (42ac5e9284c17e132de45d67619452694c44049cb959fa7dac6ddb0e7ae6b2e1): Bind for 0.0.0.0:9200 failed: port is already allocated")
../../build/ve/docker/lib/python3.9/site-packages/docker/errors.py:31: APIError
During handling of the above exception, another exception occurred:
self = <class "test_enterprisesearch.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
../../libbeat/tests/system/beat/compose.py:66: in compose_up
project.up(
../../build/ve/docker/lib/python3.9/site-packages/compose/project.py:697: in up
results, errors = parallel.parallel_execute(
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:108: in parallel_execute
raise error_to_reraise
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:206: in producer
result = func(obj)
../../build/ve/docker/lib/python3.9/site-packages/compose/project.py:679: in do
return service.execute_convergence_plan(
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:559: in execute_convergence_plan
return self._execute_convergence_create(
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:473: in _execute_convergence_create
containers, errors = parallel_execute(
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:108: in parallel_execute
raise error_to_reraise
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:206: in producer
result = func(obj)
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:478: in <lambda>
lambda service_name: create_and_start(self, service_name.number),
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:461: in create_and_start
self.start_container(container)
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:647: in start_container
log.warn("Host is already in use by another container")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Logger compose.service (WARNING)>
msg = "Host is already in use by another container", args = (), kwargs = {}
def warn(self, msg, *args, **kwargs):
> warnings.warn("The "warn" method is deprecated, "
"use "warning" instead", DeprecationWarning, 2)
E DeprecationWarning: The "warn" method is deprecated, use "warning" instead
/usr/lib/python3.9/logging/__init__.py:1457: DeprecationWarning
```
</p></details>
</ul>
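The underlying 500 from the Docker daemon ("Bind for 0.0.0.0:9200 failed: port is already allocated") means another container or process already holds Elasticsearch's port on this CI worker. As an illustration only (not part of the test harness), a pre-flight check like this surfaces the clash before `docker compose up`:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already bound to host:port --
    the same condition behind Docker's 'port is already allocated'."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            return True
    return False

if port_in_use(9200):
    print("port 9200 is taken; stop the other Elasticsearch first")
```

`port_in_use` is a hypothetical helper; the harness itself relies on compose retries rather than a port probe.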
</p></details>
<!-- STEPS ERRORS IF ANY -->
### Steps errors [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//pipeline)
<details><summary>Expand to view the steps failures</summary>
<p>
##### `metricbeat-goIntegTest - mage goIntegTest`
<ul>
<li>Took 31 min 37 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/17143/log/?start=0">here</a></li>
<li>Description: <code>mage goIntegTest</code></li>
</ul>
##### `metricbeat-windows-2022-windows-2022 - mage build unitTest`
<ul>
<li>Took 4 min 51 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/17298/log/?start=0">here</a></li>
<li>Description: <code>mage build unitTest</code></li>
</ul>
##### `x-pack/metricbeat-pythonIntegTest - mage pythonIntegTest`
<ul>
<li>Took 26 min 7 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/16452/log/?start=0">here</a></li>
<li>Description: <code>mage pythonIntegTest</code></li>
</ul>
##### `x-pack/metricbeat-pythonIntegTest - mage pythonIntegTest`
<ul>
<li>Took 19 min 19 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/23989/log/?start=0">here</a></li>
<li>Description: <code>mage pythonIntegTest</code></li>
</ul>
##### `x-pack/metricbeat-pythonIntegTest - mage pythonIntegTest`
<ul>
<li>Took 20 min 49 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/24195/log/?start=0">here</a></li>
<li>Description: <code>mage pythonIntegTest</code></li>
</ul>
##### `Error signal`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/24310/log/?start=0">here</a></li>
<li>Description: <code>Error "hudson.AbortException: script returned exit code 1"</code></l1>
</ul>
</p>
</details>
# Build 5 for 8.4 with status FAILURE
## :broken_heart: Tests Failed
<!-- BUILD BADGES-->
> _the below badges are clickable and redirect to their specific view in the CI or DOCS_
[](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//pipeline) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//tests) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//changes) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//artifacts) [](http://beats_null.docs-preview.app.elstc.co/diff) [](https://ci-stats.elastic.co/app/apm/services/beats-ci/transactions/view?rangeFrom=2022-08-03T13:53:28.541Z&rangeTo=2022-08-03T14:13:28.541Z&transactionName=Beats/beats/8.4&transactionType=job&latencyAggregationType=avg&traceId=e2a8217489c8f6fddd0ae9f93455fd6d&transactionId=247bc374cb614958)
<!-- BUILD SUMMARY-->
<details><summary>Expand to view the summary</summary>
<p>
#### Build stats
* Start Time: 2022-08-03T14:03:28.541+0000
* Duration: 100 min 52 sec
#### Test stats :test_tube:
| Test | Results |
| ------------ | :-----------------------------: |
| Failed | 10 |
| Passed | 24295 |
| Skipped | 2254 |
| Total | 26559 |
</p>
</details>
<!-- TEST RESULTS IF ANY-->
### Test errors [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//tests)
<details><summary>Expand to view the tests failures</summary><p>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_dashboards – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
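The `compose_up` helper in the trace above polls container health in a loop and raises once `COMPOSE_TIMEOUT` elapses. Stripped of the Docker specifics, the pattern is a plain poll-until-deadline loop; this is a generic sketch, not the harness code:

```python
import time

def wait_until(predicate, timeout: float, interval: float = 0.1) -> bool:
    """Poll `predicate` until it returns True or `timeout` seconds pass.
    Returns False on timeout instead of raising, unlike compose_up."""
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

`compose_up` does the same thing with `is_healthy(container)` as the predicate, printing the container logs before raising when the deadline passes.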
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_export_config – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_export_ilm_policy – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_export_index_pattern – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_export_index_pattern_migration – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_export_template – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_index_management – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_start_stop – x-pack.metricbeat.tests.system.test_xpack_base.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <class "test_xpack_base.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class "test_xpack_base.Test">
@classmethod
def compose_up(cls):
"""
Ensure *only* the services defined under `COMPOSE_SERVICES` are running and healthy
"""
if not INTEGRATION_TESTS or not cls.COMPOSE_SERVICES:
return
if os.environ.get("NO_COMPOSE"):
return
def print_logs(container):
print("---- " + container.name_without_project)
print(container.logs())
print("----")
def is_healthy(container):
return container.inspect()["State"]["Health"]["Status"] == "healthy"
project = cls.compose_project()
with disabled_logger("compose.service"):
project.pull(
ignore_pull_failures=True,
service_names=cls.COMPOSE_SERVICES)
project.up(
strategy=ConvergenceStrategy.always,
service_names=cls.COMPOSE_SERVICES,
timeout=30)
# Wait for them to be healthy
start = time.time()
while True:
containers = project.containers(
service_names=cls.COMPOSE_SERVICES,
stopped=True)
healthy = True
for container in containers:
if not container.is_running:
print_logs(container)
raise Exception(
"Container %s unexpectedly finished on startup" %
container.name_without_project)
if not is_healthy(container):
healthy = False
break
if healthy:
break
if cls.COMPOSE_ADVERTISED_HOST:
for service in cls.COMPOSE_SERVICES:
cls._setup_advertised_host(project, service)
time.sleep(1)
timeout = time.time() - start > cls.COMPOSE_TIMEOUT
if timeout:
for container in containers:
if not is_healthy(container):
print_logs(container)
> raise Exception(
"Timeout while waiting for healthy "
"docker-compose services: %s" %
",".join(cls.COMPOSE_SERVICES))
E Exception: Timeout while waiting for healthy docker-compose services: elasticsearch,kibana
../../libbeat/tests/system/beat/compose.py:102: Exception
```
</p></details>
</ul>
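The `compose_up` method shown in these traces polls container health once per second until `COMPOSE_TIMEOUT` elapses, then dumps logs and raises. A minimal standalone sketch of that wait-for-healthy pattern (function and parameter names are hypothetical, not part of libbeat):

```python
import time

def wait_for_healthy(check, timeout=300.0, interval=1.0,
                     now=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or `timeout` seconds pass.

    Returns True on success, False on timeout -- the caller can then
    print container logs and raise, as compose_up does.
    """
    start = now()
    while True:
        if check():
            return True
        if now() - start > timeout:
            return False
        sleep(interval)
```

Injecting `now` and `sleep` keeps the loop unit-testable without real delays, which the inlined version in `compose.py` is not.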
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_health – x-pack.metricbeat.module.enterprisesearch.test_enterprisesearch.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
response = <Response [500]>
def _raise_for_status(self, response):
"""Raises stored :class:`APIError`, if one occurred."""
try:
> response.raise_for_status()
../../build/ve/docker/lib/python3.9/site-packages/docker/api/client.py:268:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Response [500]>
def raise_for_status(self):
"""Raises :class:`HTTPError`, if one occurred."""
http_error_msg = ""
if isinstance(self.reason, bytes):
# We attempt to decode utf-8 first because some servers
# choose to localize their reason strings. If the string
# isn't utf-8, we fall back to iso-8859-1 for all other
# encodings. (See PR #3538)
try:
reason = self.reason.decode("utf-8")
except UnicodeDecodeError:
reason = self.reason.decode("iso-8859-1")
else:
reason = self.reason
if 400 <= self.status_code < 500:
http_error_msg = u"%s Client Error: %s for url: %s" % (self.status_code, reason, self.url)
elif 500 <= self.status_code < 600:
http_error_msg = u"%s Server Error: %s for url: %s" % (self.status_code, reason, self.url)
if http_error_msg:
> raise HTTPError(http_error_msg, response=self)
E requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start
../../build/ve/docker/lib/python3.9/site-packages/requests/models.py:943: HTTPError
During handling of the above exception, another exception occurred:
self = <Service: elasticsearch>
container = <Container: enterprisesearch_7918a246f6f8_elasticsearch_1 (5e716b)>
use_network_aliases = True
def start_container(self, container, use_network_aliases=True):
self.connect_container_to_networks(container, use_network_aliases)
try:
> container.start()
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:643:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Container: enterprisesearch_7918a246f6f8_elasticsearch_1 (5e716b)>
options = {}
def start(self, **options):
> return self.client.start(self.id, **options)
../../build/ve/docker/lib/python3.9/site-packages/compose/container.py:228:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
resource_id = "5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016"
args = (), kwargs = {}
@functools.wraps(f)
def wrapped(self, resource_id=None, *args, **kwargs):
if resource_id is None and kwargs.get(resource_name):
resource_id = kwargs.pop(resource_name)
if isinstance(resource_id, dict):
resource_id = resource_id.get("Id", resource_id.get("ID"))
if not resource_id:
raise errors.NullResource(
"Resource ID was not provided"
)
> return f(self, resource_id, *args, **kwargs)
../../build/ve/docker/lib/python3.9/site-packages/docker/utils/decorators.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
container = "5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016"
args = (), kwargs = {}
url = "http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start"
res = <Response [500]>
@utils.check_resource("container")
def start(self, container, *args, **kwargs):
"""
Start a container. Similar to the ``docker start`` command, but
doesn't support attach options.
**Deprecation warning:** Passing configuration options in ``start`` is
no longer supported. Users are expected to provide host config options
in the ``host_config`` parameter of
:py:meth:`~ContainerApiMixin.create_container`.
Args:
container (str): The container to start
Raises:
:py:class:`docker.errors.APIError`
If the server returns an error.
:py:class:`docker.errors.DeprecatedMethod`
If any argument besides ``container`` are provided.
Example:
>>> container = client.api.create_container(
... image="busybox:latest",
... command="/bin/sleep 30")
>>> client.api.start(container=container.get("Id"))
"""
if args or kwargs:
raise errors.DeprecatedMethod(
"Providing configuration in the start() method is no longer "
"supported. Use the host_config param in create_container "
"instead."
)
url = self._url("/containers/{0}/start", container)
res = self._post(url)
> self._raise_for_status(res)
../../build/ve/docker/lib/python3.9/site-packages/docker/api/container.py:1109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
response = <Response [500]>
def _raise_for_status(self, response):
"""Raises stored :class:`APIError`, if one occurred."""
try:
response.raise_for_status()
except requests.exceptions.HTTPError as e:
> raise create_api_error_from_http_exception(e)
../../build/ve/docker/lib/python3.9/site-packages/docker/api/client.py:270:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
e = HTTPError("500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start")
def create_api_error_from_http_exception(e):
"""
Create a suitable APIError from requests.exceptions.HTTPError.
"""
response = e.response
try:
explanation = response.json()["message"]
except ValueError:
explanation = (response.content or "").strip()
cls = APIError
if response.status_code == 404:
if explanation and ("No such image" in str(explanation) or
"not found: does not exist or no pull access"
in str(explanation) or
"repository does not exist" in str(explanation)):
cls = ImageNotFound
else:
cls = NotFound
> raise cls(e, response=response, explanation=explanation)
E docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start: Internal Server Error ("driver failed programming external connectivity on endpoint enterprisesearch_7918a246f6f8_elasticsearch_1 (42ac5e9284c17e132de45d67619452694c44049cb959fa7dac6ddb0e7ae6b2e1): Bind for 0.0.0.0:9200 failed: port is already allocated")
../../build/ve/docker/lib/python3.9/site-packages/docker/errors.py:31: APIError
During handling of the above exception, another exception occurred:
self = <class "test_enterprisesearch.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
../../libbeat/tests/system/beat/compose.py:66: in compose_up
project.up(
../../build/ve/docker/lib/python3.9/site-packages/compose/project.py:697: in up
results, errors = parallel.parallel_execute(
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:108: in parallel_execute
raise error_to_reraise
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:206: in producer
result = func(obj)
../../build/ve/docker/lib/python3.9/site-packages/compose/project.py:679: in do
return service.execute_convergence_plan(
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:559: in execute_convergence_plan
return self._execute_convergence_create(
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:473: in _execute_convergence_create
containers, errors = parallel_execute(
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:108: in parallel_execute
raise error_to_reraise
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:206: in producer
result = func(obj)
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:478: in <lambda>
lambda service_name: create_and_start(self, service_name.number),
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:461: in create_and_start
self.start_container(container)
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:647: in start_container
log.warn("Host is already in use by another container")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Logger compose.service (WARNING)>
msg = "Host is already in use by another container", args = (), kwargs = {}
def warn(self, msg, *args, **kwargs):
> warnings.warn("The 'warn' method is deprecated, "
"use 'warning' instead", DeprecationWarning, 2)
E DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
/usr/lib/python3.9/logging/__init__.py:1457: DeprecationWarning
```
</p></details>
</ul>
##### `Build&Test / x-pack/metricbeat-pythonIntegTest / test_stats – x-pack.metricbeat.module.enterprisesearch.test_enterprisesearch.Test`
<ul>
<details><summary>Expand to view the error details</summary><p>
```
failed on setup with "DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead"
```
</p></details>
<details><summary>Expand to view the stacktrace</summary><p>
```
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
response = <Response [500]>
def _raise_for_status(self, response):
"""Raises stored :class:`APIError`, if one occurred."""
try:
> response.raise_for_status()
../../build/ve/docker/lib/python3.9/site-packages/docker/api/client.py:268:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Response [500]>
def raise_for_status(self):
"""Raises :class:`HTTPError`, if one occurred."""
http_error_msg = ""
if isinstance(self.reason, bytes):
# We attempt to decode utf-8 first because some servers
# choose to localize their reason strings. If the string
# isn't utf-8, we fall back to iso-8859-1 for all other
# encodings. (See PR #3538)
try:
reason = self.reason.decode("utf-8")
except UnicodeDecodeError:
reason = self.reason.decode("iso-8859-1")
else:
reason = self.reason
if 400 <= self.status_code < 500:
http_error_msg = u"%s Client Error: %s for url: %s" % (self.status_code, reason, self.url)
elif 500 <= self.status_code < 600:
http_error_msg = u"%s Server Error: %s for url: %s" % (self.status_code, reason, self.url)
if http_error_msg:
> raise HTTPError(http_error_msg, response=self)
E requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start
../../build/ve/docker/lib/python3.9/site-packages/requests/models.py:943: HTTPError
During handling of the above exception, another exception occurred:
self = <Service: elasticsearch>
container = <Container: enterprisesearch_7918a246f6f8_elasticsearch_1 (5e716b)>
use_network_aliases = True
def start_container(self, container, use_network_aliases=True):
self.connect_container_to_networks(container, use_network_aliases)
try:
> container.start()
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:643:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Container: enterprisesearch_7918a246f6f8_elasticsearch_1 (5e716b)>
options = {}
def start(self, **options):
> return self.client.start(self.id, **options)
../../build/ve/docker/lib/python3.9/site-packages/compose/container.py:228:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
resource_id = "5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016"
args = (), kwargs = {}
@functools.wraps(f)
def wrapped(self, resource_id=None, *args, **kwargs):
if resource_id is None and kwargs.get(resource_name):
resource_id = kwargs.pop(resource_name)
if isinstance(resource_id, dict):
resource_id = resource_id.get("Id", resource_id.get("ID"))
if not resource_id:
raise errors.NullResource(
"Resource ID was not provided"
)
> return f(self, resource_id, *args, **kwargs)
../../build/ve/docker/lib/python3.9/site-packages/docker/utils/decorators.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
container = "5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016"
args = (), kwargs = {}
url = "http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start"
res = <Response [500]>
@utils.check_resource("container")
def start(self, container, *args, **kwargs):
"""
Start a container. Similar to the ``docker start`` command, but
doesn't support attach options.
**Deprecation warning:** Passing configuration options in ``start`` is
no longer supported. Users are expected to provide host config options
in the ``host_config`` parameter of
:py:meth:`~ContainerApiMixin.create_container`.
Args:
container (str): The container to start
Raises:
:py:class:`docker.errors.APIError`
If the server returns an error.
:py:class:`docker.errors.DeprecatedMethod`
If any argument besides ``container`` are provided.
Example:
>>> container = client.api.create_container(
... image="busybox:latest",
... command="/bin/sleep 30")
>>> client.api.start(container=container.get("Id"))
"""
if args or kwargs:
raise errors.DeprecatedMethod(
"Providing configuration in the start() method is no longer "
"supported. Use the host_config param in create_container "
"instead."
)
url = self._url("/containers/{0}/start", container)
res = self._post(url)
> self._raise_for_status(res)
../../build/ve/docker/lib/python3.9/site-packages/docker/api/container.py:1109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7fbcec10f1f0>
response = <Response [500]>
def _raise_for_status(self, response):
"""Raises stored :class:`APIError`, if one occurred."""
try:
response.raise_for_status()
except requests.exceptions.HTTPError as e:
> raise create_api_error_from_http_exception(e)
../../build/ve/docker/lib/python3.9/site-packages/docker/api/client.py:270:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
e = HTTPError("500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start")
def create_api_error_from_http_exception(e):
"""
Create a suitable APIError from requests.exceptions.HTTPError.
"""
response = e.response
try:
explanation = response.json()["message"]
except ValueError:
explanation = (response.content or "").strip()
cls = APIError
if response.status_code == 404:
if explanation and ("No such image" in str(explanation) or
"not found: does not exist or no pull access"
in str(explanation) or
"repository does not exist" in str(explanation)):
cls = ImageNotFound
else:
cls = NotFound
> raise cls(e, response=response, explanation=explanation)
E docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.41/containers/5e716bfbbd406fb8ecc5c102f41961682d826410ab50a07047b151aabaadb016/start: Internal Server Error ("driver failed programming external connectivity on endpoint enterprisesearch_7918a246f6f8_elasticsearch_1 (42ac5e9284c17e132de45d67619452694c44049cb959fa7dac6ddb0e7ae6b2e1): Bind for 0.0.0.0:9200 failed: port is already allocated")
../../build/ve/docker/lib/python3.9/site-packages/docker/errors.py:31: APIError
During handling of the above exception, another exception occurred:
self = <class "test_enterprisesearch.Test">
@classmethod
def setUpClass(self):
self.beat_name = "metricbeat"
self.beat_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../"))
self.template_paths = [
os.path.abspath(os.path.join(self.beat_path, "../../metricbeat")),
os.path.abspath(os.path.join(self.beat_path, "../../libbeat")),
]
> super(XPackTest, self).setUpClass()
tests/system/xpack_metricbeat.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../metricbeat/tests/system/metricbeat.py:42: in setUpClass
super().setUpClass()
../../libbeat/tests/system/beat/beat.py:204: in setUpClass
cls.compose_up_with_retries()
../../libbeat/tests/system/beat/beat.py:222: in compose_up_with_retries
raise ex
../../libbeat/tests/system/beat/beat.py:218: in compose_up_with_retries
cls.compose_up()
../../libbeat/tests/system/beat/compose.py:66: in compose_up
project.up(
../../build/ve/docker/lib/python3.9/site-packages/compose/project.py:697: in up
results, errors = parallel.parallel_execute(
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:108: in parallel_execute
raise error_to_reraise
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:206: in producer
result = func(obj)
../../build/ve/docker/lib/python3.9/site-packages/compose/project.py:679: in do
return service.execute_convergence_plan(
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:559: in execute_convergence_plan
return self._execute_convergence_create(
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:473: in _execute_convergence_create
containers, errors = parallel_execute(
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:108: in parallel_execute
raise error_to_reraise
../../build/ve/docker/lib/python3.9/site-packages/compose/parallel.py:206: in producer
result = func(obj)
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:478: in <lambda>
lambda service_name: create_and_start(self, service_name.number),
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:461: in create_and_start
self.start_container(container)
../../build/ve/docker/lib/python3.9/site-packages/compose/service.py:647: in start_container
log.warn("Host is already in use by another container")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Logger compose.service (WARNING)>
msg = "Host is already in use by another container", args = (), kwargs = {}
def warn(self, msg, *args, **kwargs):
> warnings.warn("The 'warn' method is deprecated, "
"use 'warning' instead", DeprecationWarning, 2)
E DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
/usr/lib/python3.9/logging/__init__.py:1457: DeprecationWarning
```
</p></details>
</ul>
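The root cause in the enterprisesearch failures is `Bind for 0.0.0.0:9200 failed: port is already allocated`: another container, often a leftover from a previous run, still holds Elasticsearch's port. A small pre-flight check along these lines could surface the conflict before compose tries to start the service (sketch only; not part of the test harness):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. when a listener already occupies the port
        return s.connect_ex((host, port)) == 0
```

For example, `port_in_use(9200)` before bringing up the elasticsearch service; `docker ps --filter publish=9200` then identifies the container holding the port.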
</p></details>
<!-- STEPS ERRORS IF ANY -->
### Steps errors [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/5//pipeline)
<details><summary>Expand to view the steps failures</summary>
<p>
##### `metricbeat-goIntegTest - mage goIntegTest`
<ul>
<li>Took 31 min 37 sec. View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/17143/log/?start=0">here</a></li>
<li>Description: <code>mage goIntegTest</code></li>
</ul>
##### `metricbeat-windows-2022-windows-2022 - mage build unitTest`
<ul>
<li>Took 4 min 51 sec. View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/17298/log/?start=0">here</a></li>
<li>Description: <code>mage build unitTest</code></li>
</ul>
##### `x-pack/metricbeat-pythonIntegTest - mage pythonIntegTest`
<ul>
<li>Took 26 min 7 sec. View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/16452/log/?start=0">here</a></li>
<li>Description: <code>mage pythonIntegTest</code></li>
</ul>
##### `x-pack/metricbeat-pythonIntegTest - mage pythonIntegTest`
<ul>
<li>Took 19 min 19 sec. View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/23989/log/?start=0">here</a></li>
<li>Description: <code>mage pythonIntegTest</code></li>
</ul>
##### `x-pack/metricbeat-pythonIntegTest - mage pythonIntegTest`
<ul>
<li>Took 20 min 49 sec. View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/24195/log/?start=0">here</a></li>
<li>Description: <code>mage pythonIntegTest</code></li>
</ul>
##### `Error signal`
<ul>
<li>Took 0 min 0 sec. View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/5/steps/24310/log/?start=0">here</a></li>
<li>Description: <code>Error "hudson.AbortException: script returned exit code 1"</code></li>
</ul>
</p>
</details>
79,965 | 29,802,613,546 | IssuesEvent | 2023-06-16 09:14:20 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | ParseExceptions are being thrown when exporting DDL of a view that includes an "interval" clause | T: Defect C: Functionality P: Medium R: Duplicate E: All Editions C: Parser | ### Expected behavior
I expect jOOQ to export the DDL of the view:
```sql
create or replace view test_table_view as
select id, order_date + interval id day
from test_table_for_view;
```
### Actual behavior
jOOQ throws an exception when trying to export the DDL of the view:
```java
Exception in thread "main" org.jooq.impl.ParserException: Token ')' expected: [1:141] ...la`.`test_table_for_view`.`order_date` + interval [*]`sakila`.`test_table_for_view`.`id` day) AS `order_date + interval id day` from ...
at org.jooq.impl.DefaultParseContext.expected(ParserImpl.java:14686)
at org.jooq.impl.DefaultParseContext.parse(ParserImpl.java:13925)
at org.jooq.impl.DefaultParseContext.parse(ParserImpl.java:13920)
```
### Steps to reproduce the problem
Create a table:
```sql
create table `test_table_for_view` (
`id` int(11) not null,
`order_date` datetime not null,
primary key (`id`)
);
```
Create a view:
```sql
create or replace view test_table_view as
select id, order_date + interval id day
from test_table_for_view;
```
Run this code:
```java
Connection conn = DriverManager.getConnection("jdbc:mysql://127.0.0.1:6000/sakila", "root", "admin");
Configuration configuration = new DefaultConfiguration().set(conn).set(SQLDialect.MYSQL);
Meta meta = using(configuration).meta();
Arrays.stream(meta.filterSchemas(v -> v.getName().equalsIgnoreCase("sakila")).ddl().queries()).forEach(System.out::println);
```
### jOOQ Version
jOOQ Professional Edition 3.18.4
### Database product and version
MySQL 5.7.42
### Java Version
openjdk 17.0.2 2022-01-18
### OS Version
Microsoft Windows [Version 10.0.19044.2846]
### JDBC driver name and version (include name if unofficial driver)
mysql-connector-java:8.0.33 | 1.0 | non_priority | 0 |
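To illustrate the grammar corner the parser is tripping over, here is a hedged, hypothetical sketch in Python (not jOOQ's actual parser) of an additive-expression rule that special-cases MySQL's `<expr> + INTERVAL <expr> <unit>` syntax:

```python
import re

TOKEN = re.compile(r"\s*(\w+|[()+])")

def tokenize(sql):
    return TOKEN.findall(sql)

class Parser:
    def __init__(self, tokens):
        self.toks, self.i = tokens, 0

    def peek(self):
        return self.toks[self.i] if self.i < len(self.toks) else None

    def next(self):
        tok = self.peek()
        self.i += 1
        return tok

    def parse_sum(self):
        left = self.parse_primary()
        while self.peek() == "+":
            self.next()
            # MySQL temporal arithmetic: the right operand of '+' may be
            # "INTERVAL <expr> <unit>" rather than a plain expression.
            if self.peek() and self.peek().upper() == "INTERVAL":
                self.next()
                amount = self.parse_primary()
                unit = self.next()  # DAY, MONTH, ...
                left = ("add_interval", left, amount, unit.upper())
            else:
                left = ("add", left, self.parse_primary())
        return left

    def parse_primary(self):
        if self.peek() == "(":
            self.next()
            e = self.parse_sum()
            assert self.next() == ")", "')' expected"
            return e
        return ("col", self.next())

print(Parser(tokenize("(order_date + interval id day)")).parse_sum())
# ('add_interval', ('col', 'order_date'), ('col', 'id'), 'DAY')
```

If the `INTERVAL` branch is removed, `id` and `day` are left unconsumed after `interval` is misread as a column, and the closing-parenthesis check fails — the same "')' expected" shape as the error reported above.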
85,492 | 15,736,722,498 | IssuesEvent | 2021-03-30 01:17:13 | mgh3326/querydsl | https://api.github.com/repos/mgh3326/querydsl | opened | CVE-2020-13935 (High) detected in tomcat-embed-websocket-9.0.30.jar | security vulnerability | ## CVE-2020-13935 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-websocket-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: querydsl/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-websocket/9.0.30/33157f6bc5bfd03380ebb5ac476db0600a04168d/tomcat-embed-websocket-9.0.30.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-websocket/9.0.30/33157f6bc5bfd03380ebb5ac476db0600a04168d/tomcat-embed-websocket-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.3.RELEASE.jar
- :x: **tomcat-embed-websocket-9.0.30.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The payload length in a WebSocket frame was not correctly validated in Apache Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M1 to 9.0.36, 8.5.0 to 8.5.56 and 7.0.27 to 7.0.104. Invalid payload lengths could trigger an infinite loop. Multiple requests with invalid payload lengths could lead to a denial of service.
<p>Publish Date: 2020-07-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13935>CVE-2020-13935</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rd48c72bd3255bda87564d4da3791517c074d94f8a701f93b85752651%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rd48c72bd3255bda87564d4da3791517c074d94f8a701f93b85752651%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-websocket:7.0.105,8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-websocket:7.0.105,8.5.57,9.0.37,10.0.0-M7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
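The 7.5 score in the details above can be reproduced from the listed metrics using the published CVSS v3 base-score equations (scope unchanged); the weights below are the standard ones from the specification:

```python
import math

# CVSS v3 numeric weights for the metrics listed above (scope unchanged).
AV_N, AC_L, PR_N, UI_N = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
C, I, A = 0.0, 0.0, 0.56                          # None / None / High

def roundup(x):
    return math.ceil(x * 10) / 10  # CVSS rounds up to one decimal place

iss = 1 - (1 - C) * (1 - I) * (1 - A)             # impact sub-score = 0.56
impact = 6.42 * iss                               # scope-unchanged impact
exploitability = 8.22 * AV_N * AC_L * PR_N * UI_N
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 7.5
```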
169,254 | 26,772,509,022 | IssuesEvent | 2023-01-31 15:01:41 | akroma-project/akroma-wallet-mobile | https://api.github.com/repos/akroma-project/akroma-wallet-mobile | closed | Send & Receive Screens | Mobile UI Design | Creation of a modal or pop-out showing the QR scan to receive AKA and a view once a user wants to send AKA. (The sent AKA must show the user the option to send for one of his **favorite wallets**) | 1.0 | non_priority | 0 |
149,509 | 13,282,323,343 | IssuesEvent | 2020-08-23 22:11:15 | dionthorn/2DTacticalRPG | https://api.github.com/repos/dionthorn/2DTacticalRPG | closed | Javadoc compatibility | documentation | Should probably rework the endless lines of // comments into legit javadoc blocks. | 1.0 | non_priority | 0 |
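A hedged sketch of the mechanical part of such a rework — collapsing each run of consecutive `//` line comments into a `/** ... */` Javadoc block (a real tool would also need to skip comment markers inside string literals, which this toy version does not):

```python
import re

def to_javadoc(source):
    """Convert each run of consecutive '//' line comments into a Javadoc block."""
    out, run = [], []

    def flush():
        if run:
            indent = run[0][0]
            out.append(f"{indent}/**")
            out.extend(f"{indent} * {text}" for _, text in run)
            out.append(f"{indent} */")
            run.clear()

    for line in source.splitlines():
        m = re.match(r"(\s*)//\s?(.*)", line)
        if m:
            run.append(m.groups())   # (indentation, comment text)
        else:
            flush()
            out.append(line)
    flush()  # handle a trailing comment run
    return "\n".join(out)

print(to_javadoc("// Moves the player.\n// Checks collision first.\nvoid move() {}"))
```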
15,303 | 19,343,335,512 | IssuesEvent | 2021-12-15 08:11:29 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Discussion: A way to prevent `Process` from configuring the terminal | area-System.Diagnostics.Process untriaged | I'm filing this as a blank issue rather than an API proposal as I don't yet know how an API for this might look (or if the team is even willing to support it).
In my [System.Terminal project](https://github.com/alexrp/system-terminal) I would like to [ban](https://github.com/alexrp/system-terminal/blob/master/src/core/buildTransitive/BannedSymbols.txt) `System.Diagnostics.Process` and provide a wrapper which ensures that System.Native doesn't step on my library's toes too much.
Unfortunately, while I can cancel terminal configuration changes done as a result of `SIGCHLD` (via `PosixSignalRegistration`), there is currently no way for me to block the changes that are done when a `Process` starts (or fails to start). I would like for there to be some kind of API to achieve this. I don't know exactly what shape it ought to take though.
My immediate thought was a `bool ProcessStartInfo.ConfigureConsole` property, but that doesn't seem to fit very well with how terminal configuration is restored once there are no more child processes. It's probably not obvious to a user that not only should they disable this property, they should also cancel the runtime's default `SIGCHLD` handling with `PosixSignalRegistration`.
Perhaps something like:
```csharp
public class Process
{
public static event Action<bool>? ConfigureConsole;
}
```
If the event is non-null, the framework code would assume that at least one of the handlers is going to take care of configuring the console, so the framework doesn't do it. The `bool` parameter would indicate whether the configuration is being done because child processes are starting or because there are no more child processes left. The upside of this approach is that there would be no need to cancel `SIGCHLD` handling. | 1.0 | Discussion: A way to prevent `Process` from configuring the terminal - I'm filing this as a blank issue rather than an API proposal as I don't yet know how an API for this might look (or if the team is even willing to support it).
In my [System.Terminal project](https://github.com/alexrp/system-terminal) I would like to [ban](https://github.com/alexrp/system-terminal/blob/master/src/core/buildTransitive/BannedSymbols.txt) `System.Diagnostics.Process` and provide a wrapper which ensures that System.Native doesn't step on my library's toes too much.
Unfortunately, while I can cancel terminal configuration changes done as a result of `SIGCHLD` (via `PosixSignalRegistration`), there is currently no way for me to block the changes that are done when a `Process` starts (or fails to start). I would like for there to be some kind of API to achieve this. I don't know exactly what shape it ought to take though.
My immediate thought was a `bool ProcessStartInfo.ConfigureConsole` property, but that doesn't seem to fit very well with how terminal configuration is restored once there are no more child processes. It's probably not obvious to a user that not only should they disable this property, they should also cancel the runtime's default `SIGCHLD` handling with `PosixSignalRegistration`.
Perhaps something like:
```csharp
public class Process
{
public static event Action<bool>? ConfigureConsole;
}
```
If the event is non-null, the framework code would assume that at least one of the handlers is going to take care of configuring the console, so the framework doesn't do it. The `bool` parameter would indicate whether the configuration is being done because child processes are starting or because there are no more child processes left. The upside of this approach is that there would be no need to cancel `SIGCHLD` handling. | non_priority | discussion a way to prevent process from configuring the terminal i m filing this as a blank issue rather than an api proposal as i don t yet know how an api for this might look or if the team is even willing to support it in my i would like to system diagnostics process and provide a wrapper which ensures that system native doesn t step on my library s toes too much unfortunately while i can cancel terminal configuration changes done as a result of sigchld via posixsignalregistration there is currently no way for me to block the changes that are done when a process starts or fails to start i would like for there to be some kind of api to achieve this i don t know exactly what shape it ought to take though my immediate thought was a bool processstartinfo configureconsole property but that doesn t seem to fit very well with how terminal configuration is restored once there are no more child processes it s probably not obvious to a user that not only should they disable this property they should also cancel the runtime s default sigchld handling with posixsignalregistration perhaps something like csharp public class process public static event action configureconsole if the event is non null the framework code would assume that at least one of the handlers is going to take care of configuring the console so the framework doesn t do it the bool parameter would indicate whether the configuration is being done because child processes are starting or because there are no more child processes left the upside of this approach is that 
there would be no need to cancel sigchld handling | 0 |
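The proposed contract — skip the framework default whenever at least one handler is registered — is language-agnostic. A hypothetical Python sketch of that shape (not the actual .NET API; the names here are invented for illustration):

```python
class Process:
    # Handlers registered here take over terminal/console configuration.
    configure_console_handlers = []

    @classmethod
    def _on_child_count_changed(cls, children_starting):
        if cls.configure_console_handlers:
            # At least one handler claims responsibility: suppress the default.
            for handler in cls.configure_console_handlers:
                handler(children_starting)
            return "handled externally"
        return "framework default applied"

events = []
Process.configure_console_handlers.append(events.append)
print(Process._on_child_count_changed(True))  # handled externally
print(events)                                  # [True]
```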
75,149 | 14,405,423,502 | IssuesEvent | 2020-12-03 18:41:01 | mozilla-mobile/android-components | https://api.github.com/repos/mozilla-mobile/android-components | opened | Remove or replace TabsToolbarFeature | <tabs> <toolbar> ⌨️ code | Currently usage of this feature looks like:
```
TabsToolbarFeature(
toolbar = layout.toolbar,
store = components.store,
sessionId = sessionId,
lifecycleOwner = viewLifecycleOwner,
showTabs = ::showTabs
)
```
We create an instance of the feature that is never used. It just internally creates a button and adds it to the toolbar. In addition, the feature is currently not lifecycle aware which required workarounds in the `TabCounterToolbarButton` it creates internally (e.g. a weak reference to the `TabCounter`). | 1.0 | non_priority | 0 |
299,324 | 22,599,627,399 | IssuesEvent | 2022-06-29 07:59:05 | esi-neuroscience/syncopy | https://api.github.com/repos/esi-neuroscience/syncopy | closed | Check frontend for `select` kw documentation | Documentation | Neither `freqanalysis` nor `preprocessing` are mentioning that handy selection mechanism at all atm | 1.0 | non_priority | 0 |
29,381 | 13,102,239,693 | IssuesEvent | 2020-08-04 06:14:33 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | closed | Can't set cors.supportCredentials property to true | Service Attention Web Apps | ### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az webapp config set`
**Errors:**
```
not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
azure/cli/core/util.py, ln 244, in shell_safe_json_parse
return json.loads(json_or_dict_string)
lib64/python3.7/json/__init__.py, ln 348, in loads
return _default_decoder.decode(s)
lib64/python3.7/json/decoder.py, ln 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
lib64/python3.7/json/decoder.py, ln 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
...
cli/command_modules/appservice/custom.py, ln 813, in update_site_configs
config_name, value = s.split('=', 1)
ValueError: not enough values to unpack (expected 2, got 1)
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az webapp config set -g {} -n {} --generic-configurations "{"cors":{"supportCredentials":true}}"`
## Expected Behavior
supportCredentials is set to true
## Environment Summary
```
Linux-5.3.6-1-MANJARO-x86_64-with-arch-Manjaro-Linux
Python 3.7.4
Shell: bash
azure-cli 2.0.75
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
| 1.0 | non_priority | 0 |
28,522 | 13,731,738,384 | IssuesEvent | 2020-10-05 02:12:40 | TDycores-Project/TDycore | https://api.github.com/repos/TDycores-Project/TDycore | opened | MPFAO model initialization takes a long time. | MPFAO performance | This has been on our radar for a while, so I'm creating an issue to track progress. Now that we've instrumented TDycore with timers, I'm able to see where the time is being spent in initialization.
## Profiling Notes
I'm doing profiling in the `jeff-cohere/mpfao_init_profiling` branch. Here's what I'm running to profile the initialization issue:
```
cd demo/richards
make
./richards_driver -dim 3 -Nx 100 -Ny 100 -Nz 10 -tdy_timers -final_time 300
```
This runs a short-ish simulation with timers turned on. The resulting profile log, `tdycore_profile.csv`, can be loaded into a spreadsheet (or Pandas dataframe or whatever). So far, it looks like we're spending a lot of initialization time in these functions:
* `TDyDriverInitializeTDy` (74 sec)
* `TDyCreateJacobian` (48 sec)
* `DMCreateMat` (48 sec)
* `DMPlexPrealloc` (48 sec)
* `TDySetDiscretizationMethod` (21 sec)
* `TDyMPFAOInitialize` (21 sec)
The preallocation entry is telling. If we're not giving PETSc any clues about the non-zero structure matrix in our preallocation of the Jacobian, PETSc's probably doing a lot of work to figure it out on its own. My guess is that we can pass along some information to help it out. I've never used the DMPlex interface, though, so I'll have to look into what this means.
FYI @bishtgautam | True | non_priority | 0 |
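A small sketch of how the profile log can be summarized once loaded — the CSV below is a hypothetical minimal stand-in mirroring the timings reported above (the real `tdycore_profile.csv` has more columns):

```python
import csv
import io

# Hypothetical rows mirroring the timings reported above (seconds).
profile = io.StringIO(
    "function,seconds\n"
    "TDyDriverInitializeTDy,74\n"
    "DMPlexPrealloc,48\n"
    "TDyMPFAOInitialize,21\n"
)

rows = {r["function"]: float(r["seconds"]) for r in csv.DictReader(profile)}
total = rows["TDyDriverInitializeTDy"]
for name, secs in rows.items():
    print(f"{name}: {secs:.0f} s ({100 * secs / total:.0f}% of init)")
```

Run against the real log, a breakdown like this makes the `DMPlexPrealloc` share (roughly two-thirds of initialization here) obvious at a glance.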
40,402 | 5,214,584,860 | IssuesEvent | 2017-01-26 00:21:40 | dotnet/templating | https://api.github.com/repos/dotnet/templating | closed | Errors are unintuitive when directory context prevents template creation | enhancement Needs design | Attempting to create a project template in a directory where a project already exists will fail. But the error messages usually indicate other problems: e.g.
- The template name is invalid
- The flags relevant to the specified template are reported as invalid. | 1.0 | non_priority | 0 |
103,347 | 11,354,256,926 | IssuesEvent | 2020-01-24 17:11:50 | Graylog2/documentation | https://api.github.com/repos/Graylog2/documentation | opened | Add documentation about parameters | documentation | <!--- Provide a general summary of the issue in the Title above -->
Description on how parameters are used?
How many can be created at once?
Do they only exist within a saved dashboard?
Can a list be seen anywhere for faster editing/deleting of each parameter?
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be wrong -->
<!--- What were you trying to accomplish? -->
An understanding how to use parameters would be helpful
## Expected Content
<!--- Tell us what we are missing -->
Documentation as well as a helpful link added into the 'How To Use' section within the application to point you to the docs.
## Your Environment
<!--- Include as many relevant details about the environment you use -->
* Graylog Version: 3.2


 | 1.0 | non_priority | 0 |
42,356 | 10,959,712,824 | IssuesEvent | 2019-11-27 12:01:43 | icsharpcode/ILSpy | https://api.github.com/repos/icsharpcode/ILSpy | opened | Switch to .NET Core 3.1 SDK | Build Automation | Once 3.1 SDK RTMs (and build environments are updated), switch from 3.0 to 3.1 because the latter is an LTS release, and 3.0 will be eol'd rather quickly. | 1.0 | non_priority | 0 |
409,926 | 27,758,715,711 | IssuesEvent | 2023-03-16 06:08:31 | masastack/MASA.DCC | https://api.github.com/repos/masastack/MASA.DCC | closed | The translation of the text to undo the popover of the configuration object is incorrect in English. Procedure | type/documentation status/resolved severity/medium site/staging | In the English locale, the text of the popover for undoing a configuration object does not match the Chinese description, and part of the text has not been translated.


 | 1.0 | non_priority | 0 |
8,657 | 12,197,795,944 | IssuesEvent | 2020-04-29 21:27:16 | Kevpedia/Habitica-Habit-History-Connector | https://api.github.com/repos/Kevpedia/Habitica-Habit-History-Connector | closed | Please add `privacyPolicyUrl` to your connector's manifest | publish-requirement | Please add `privacyPolicyUrl` to your connector's manifest. See https://developers.google.com/datastudio/connector/manifest manifest reference for more details. | 1.0 | non_priority | 0 |
8,009 | 20,403,616,365 | IssuesEvent | 2022-02-23 00:52:30 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Return RFC 3339 (ish) style date / time values from API | type: enhancement affects: architecture work: backend work: database status: started | ## Problem
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
Currently, the style of date/time values returned by the API is whatever the Django default for handling `python` date / time types is. Further, since the dates and times are made into `python` types during the process, precision is lost, as well as functionality around dates before the common era, and dates with years containing more than 4 digits.
## Proposed solution
<!-- A clear and concise description of your proposed solution or feature. -->
We should parse the dates and times into RFC 3339 compliant strings in the database, and then pass those strings up through the API. We will need to make the following slight extensions and refinements
- We will append `AD` or `BC` to each date or datetime
- We will allow years with more than 4 digits
- We will always take the 'verbose' version when there's a choice (e.g., a timezone will always be `+08:00`, not `+08`).
## Additional context
<!-- Add any other context or screenshots about the feature request here.-->
There is a spec being written in the wiki in parallel with the implementation of this issue. The PR is here: https://github.com/centerofci/mathesar-wiki/pull/35 . | 1.0 | non_priority | 0 |
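The formatting rules listed in the issue above (verbose RFC 3339-style strings with an era suffix and a full `+08:00` offset) can be illustrated with a short sketch. The function name and era handling are illustrative assumptions, not Mathesar's actual implementation; Python's `datetime` cannot represent BC years or years with more than four digits, so only the AD branch is shown.

```python
# Sketch of the proposed output style: RFC 3339-ish timestamp,
# verbose timezone offset, and an appended era marker.
from datetime import datetime, timezone, timedelta

def format_datetime_with_era(dt: datetime) -> str:
    # Hypothetical helper, for illustration only.
    offset = dt.strftime("%z")               # e.g. "+0800"
    verbose = offset[:3] + ":" + offset[3:]  # always "+08:00", never "+08"
    return dt.strftime("%Y-%m-%dT%H:%M:%S") + verbose + " AD"

tz = timezone(timedelta(hours=8))
print(format_datetime_with_era(datetime(2021, 5, 4, 12, 30, 0, tzinfo=tz)))
# -> 2021-05-04T12:30:00+08:00 AD
```

Doing this parsing in the database, as the issue proposes, avoids the precision and range limits of language-level date types.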
173,380 | 14,409,104,666 | IssuesEvent | 2020-12-04 01:23:59 | manbeardgames/monogame-aseprite | https://api.github.com/repos/manbeardgames/monogame-aseprite | opened | Update documentation for 2.0 | Documentation | Due to the changes in the 2.0 update, all documentation will need to be rewritten. | 1.0 | Update documentation for 2.0 - Due to the changes in the 2.0 update, all documentation will need to be rewritten. | non_priority | update documentation for due to the changes in the update all documentation will need to be rewritten | 0 |
89,251 | 15,827,602,262 | IssuesEvent | 2021-04-06 08:53:39 | matrixknight/Umbraco-CMS | https://api.github.com/repos/matrixknight/Umbraco-CMS | opened | WS-2019-0026 (Medium) detected in marked-0.2.9.tgz | security vulnerability | ## WS-2019-0026 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.2.9.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.2.9.tgz">https://registry.npmjs.org/marked/-/marked-0.2.9.tgz</a></p>
<p>Path to dependency file: Umbraco-CMS/src/Umbraco.Web.UI.Client/package.json</p>
<p>Path to vulnerable library: Umbraco-CMS/src/Umbraco.Web.UI.Client/node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- grunt-ngdocs-0.1.11.tgz (Root Library)
- :x: **marked-0.2.9.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/matrixknight/Umbraco-CMS/commit/cbde891e04f16eb272f30328b88a2ef0c7c720b8">cbde891e04f16eb272f30328b88a2ef0c7c720b8</a></p>
<p>Found in base branch: <b>7.0.1</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions 0.3.7 and earlier of marked unescape only a lowercase "x", while browsers support both lowercase and uppercase "x" in the hexadecimal form of an HTML character entity
<p>Publish Date: 2017-12-23
<p>URL: <a href=https://github.com/markedjs/marked/commit/6d1901ff71abb83aa32ca9a5ce47471382ea42a9>WS-2019-0026</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/commit/6d1901ff71abb83aa32ca9a5ce47471382ea42a9">https://github.com/markedjs/marked/commit/6d1901ff71abb83aa32ca9a5ce47471382ea42a9</a></p>
<p>Release Date: 2019-03-17</p>
<p>Fix Resolution: 0.3.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
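The bug class described in the advisory above (an unescaper that matches only a lowercase `x` in hexadecimal character references, while browsers also accept `X`) can be demonstrated with a small sketch. The regular expressions are illustrative, not marked's actual code.

```python
# Demonstration of the case-sensitivity gap: a lowercase-only
# unescaper misses "&#X...;" references that browsers still decode.
import re

def unescape_lowercase_only(s: str) -> str:
    # Matches only "&#x...;", mirroring the pre-0.3.9 behavior.
    return re.sub(r"&#x([0-9a-fA-F]+);", lambda m: chr(int(m.group(1), 16)), s)

def unescape_both_cases(s: str) -> str:
    # Matches "&#x...;" and "&#X...;", as browsers do.
    return re.sub(r"&#[xX]([0-9a-fA-F]+);", lambda m: chr(int(m.group(1), 16)), s)

payload = "&#X6A;avascript:alert(1)"     # uppercase-X hex reference for "j"
print(unescape_lowercase_only(payload))  # unchanged: a filter never sees "javascript:"
print(unescape_both_cases(payload))      # "javascript:alert(1)"
```

A sanitizer built on the lowercase-only variant would pass the payload through unmodified, while a browser would still decode it.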
95,555 | 12,004,535,195 | IssuesEvent | 2020-04-09 11:46:01 | COVID19Tracking/website | https://api.github.com/repos/COVID19Tracking/website | closed | Displaying screenshots -- Input Requested | DESIGN | Currently, the way we handle sharing screenshots on the historical tables is pretty messy. I would love some design input about ways to clean it up.

| 1.0 | Displaying screenshots -- Input Requested - Currently, the way we handle sharing screenshots on the historical tables is pretty messy. I would love some design input about ways to clean it up.

| non_priority | displaying screenshots input requested currently the way we handle sharing screenshots on the historical tables is pretty messy i would love some design input about ways to clean it up | 0 |