Dataset schema (column, dtype, observed range or cardinality):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 744 |
| labels | string | length 4 – 574 |
| body | string | length 9 – 211k |
| index | string | 10 classes |
| text_combine | string | length 96 – 211k |
| label | string | 2 classes |
| text | string | length 96 – 188k |
| binary_label | int64 | 0 – 1 |
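As a sanity check on the schema, here is a minimal pandas sketch. The two rows are transcribed from the preview rows in this dump; the relationship between `label` and `binary_label` is inferred from those rows, not stated anywhere in the dump itself:

```python
import pandas as pd

# Two rows transcribed from the preview; the full dataset has ~832k rows.
rows = [
    {"id": 5_394_178_293, "type": "IssuesEvent",
     "created_at": "2017-02-27 01:38:06", "repo": "uccser/kordac",
     "action": "closed", "title": "Implement {video} tag",
     "label": "process", "binary_label": 1},
    {"id": 26_718_821_243, "type": "IssuesEvent",
     "created_at": "2023-01-28 21:58:26",
     "repo": "AlexNav73/Navitski.Crystalized", "action": "closed",
     "title": "Create LazyCopyCollection/Relation types",
     "label": "non_process", "binary_label": 0},
]
df = pd.DataFrame(rows)

# created_at is a fixed-width 19-character timestamp, so the format is known.
df["created_at"] = pd.to_datetime(df["created_at"], format="%Y-%m-%d %H:%M:%S")

# binary_label appears to mirror the string column `label`:
# 1 for "process", 0 for "non_process" (inferred from the sample rows).
assert ((df["label"] == "process") == (df["binary_label"] == 1)).all()

print(df[["repo", "action", "label", "binary_label"]])
```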
**Row 2,614** (id 5,394,178,293)

- type: IssuesEvent
- created_at: 2017-02-27 01:38:06
- repo: uccser/kordac
- repo_url: https://api.github.com/repos/uccser/kordac
- action: closed
- title: Implement {video} tag
- labels: processor implementation testing
- index: 1.0
- label: process
- binary_label: 1

body:

Implement the existing tag used in the CSFG. The function is found within [`markdownsection.py`](https://github.com/uccser/cs-field-guide/blob/develop/generator/markdownsection.py).
```
[video]
regex: ^\{video (?P<args>[^\}]*)\}
function: create_video_html
```

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): implement video tag implement the existing tag used in the csfg the function is found within regex video p function create video html
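Across the rows in this dump, the `text` column looks like a lowercased, de-punctuated, digit-stripped rendering of the title and body, with URLs dropped. The sketch below is a hedged approximation of that pattern; the actual pipeline used to build the dataset is not specified here, and the sample rows show it also drops some short tokens this version keeps:

```python
import re

def clean(s: str) -> str:
    """Approximate the dataset's `text` normalisation (assumption, not spec)."""
    s = re.sub(r"https?://\S+", " ", s)      # drop URLs, as the sample rows suggest
    s = re.sub(r"[^a-z]+", " ", s.lower())   # lowercase; collapse non-letters (and digits) to spaces
    return s.strip()

# Matches the pattern of the sample rows:
print(clean("I want to be able use an Xbox 360 controller"))
# → i want to be able use an xbox controller
```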
**Row 762,443** (id 26,718,821,243)

- type: IssuesEvent
- created_at: 2023-01-28 21:58:26
- repo: AlexNav73/Navitski.Crystalized
- repo_url: https://api.github.com/repos/AlexNav73/Navitski.Crystalized
- action: closed
- title: Create LazyCopyCollection/Relation types
- labels: enhancement priority-medium
- index: 1.0
- label: non_process
- binary_label: 0

body:

Data should be copied only when it will be modified, now we are coping whole shard even if only one collection would be modified

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): create lazycopycollection relation types data should be copied only when it will be modified now we are coping whole shard even if only one collection would be modified
**Row 99,130** (id 4,047,974,555)

- type: IssuesEvent
- created_at: 2016-05-23 08:32:20
- repo: Taeir/ContextProject-MIGI2
- repo_url: https://api.github.com/repos/Taeir/ContextProject-MIGI2
- action: closed
- title: Controller Support
- labels: Priority B
- index: 1.0
- label: non_process
- binary_label: 0

body:

As the VR player,
I want to be able use an Xbox 360 controller,
so that I can control the movement of my character with it.
This task has the following subtasks:
#126 - Controller support - link
#127 - Controller support - key bindings

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): controller support as the vr player i want to be able use an xbox controller so that i can control the movement of my character with it this task has the following subtasks controller support link controller support key bindings
**Row 725,645** (id 24,969,751,328)

- type: IssuesEvent
- created_at: 2022-11-01 23:15:53
- repo: nprapps/elections22
- repo_url: https://api.github.com/repos/nprapps/elections22
- action: closed
- title: If there's only one race we're following in a state, that should appear by default on the "Key" tab
- labels: Priority: Low
- index: 1.0
- label: non_process
- binary_label: 0

body:

thinking in particular of DE, DC but there are may be a handful of others

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): if there s only one race we re following in a state that should appear by default on the key tab thinking in particular of de dc but there are may be a handful of others
**Row 7,435** (id 10,550,298,142)

- type: IssuesEvent
- created_at: 2019-10-03 10:43:12
- repo: ESMValGroup/ESMValCore
- repo_url: https://api.github.com/repos/ESMValGroup/ESMValCore
- action: closed
- title: area_weighted regridding fails on several observational datasets
- labels: bug help wanted preprocessor
- index: 1.0
- label: process
- binary_label: 1

body:

The preprocessor function `regrid` with argument `scheme = 'area_weighted'` fails on several observational datasets. The error thrown is:
```
File "/iris/experimental/regrid.py", line 680, in regrid_area_weighted_rectilinear_src_and_grid
    raise ValueError("The horizontal grid coordinates of both the source "
ValueError: The horizontal grid coordinates of both the source and grid cubes must have contiguous bounds.
```
```
# Version info
In [2]: iris.__version__
Out[2]: '2.2.1dev0'
```
Please find below the recipe, including one dataset for which everything works fine. This bug is possibly related to: https://github.com/SciTools/iris/issues/2061
Recipe (indentation restored; it was flattened in this dump):
```
# ESMValTool
---
documentation:
  description: |
    Test recipe recipe_bug_area_weighted.yml
  authors:
    - crezee_bas

################################################
# Define some default parameters using anchors #
################################################
timeperiod: &timeperiod
  start_year: 2009
  end_year: 2010
################################################

datasets:
  ### This dataset works fine:
  - { dataset: CDS-SATELLITE-SOIL-MOISTURE, type: sat, project: OBS, mip: Lmon, version: COMBINED-TCDR-v201812.0.0, tier: 3, << : *timeperiod }
  ### The following three datasets suffer from the bug:
  - { dataset: cds-era5-land-monthly, type: reanaly, project: OBS, mip: Lmon, version: 1, tier: 3, <<: *timeperiod }
  - { dataset: eraintland, type: reanaly, project: OBS6, mip: E6hr, version: 1, tier: 3, frequency: 6hr, <<: *timeperiod }
  - { dataset: cds-era5-monthly, type: reanaly, project: OBS, mip: Lmon, version: 1, tier: 3, <<: *timeperiod }
  - { dataset: CDS-UERRA, type: reanaly, project: OBS6, mip: E6hr, version: UERRA-HARMONIE, tier: 3, frequency: 6hr, << : *timeperiod }

preprocessors:
  pp_regrid:
    regrid:
      target_grid: 1x1
      scheme: area_weighted  # 'nearest' works fine for all datasets above

diagnostics:
  mapplots:
    description: mapplots
    variables:
      sm:
        preprocessor: pp_regrid
        mip: Lmon
    scripts: null
```
I will do my best to look further into this, but any help is greatly appreciated!

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text: cleaned (lowercased, de-punctuated) rendering of the title and body above; omitted here as a derived duplicate.
**Row 737** (id 3,214,323,495)

- type: IssuesEvent
- created_at: 2015-10-07 00:49:52
- repo: broadinstitute/hellbender-dataflow
- repo_url: https://api.github.com/repos/broadinstitute/hellbender-dataflow
- action: opened
- title: Consider replacing ReferenceShard and VariantShard with SimpleInterval
- labels: Dataflow DataflowPreprocessingPipeline
- index: 1.0
- label: process
- binary_label: 1

body:

_From @droazen on July 22, 2015 18:58_
_Copied from original issue: broadinstitute/hellbender#703_

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): consider replacing referenceshard and variantshard with simpleinterval from droazen on july copied from original issue broadinstitute hellbender
**Row 6,938** (id 10,102,755,739)

- type: IssuesEvent
- created_at: 2019-07-29 12:03:19
- repo: eaudeweb/ozone
- repo_url: https://api.github.com/repos/eaudeweb/ozone
- action: opened
- title: Custom titles and help text for Transfers and ProcAgent forms
- labels: Feature: Process Agents Feature: Transfers Frontend: Small stuff
- index: 1.0
- label: process
- binary_label: 1

body:

Right now the ProcAgents form shows the title of the Transfers form:
http://prntscr.com/olgw82
We need to show different texts, depending on the obligation behind the form (including the submission info tab): http://prntscr.com/olgwot

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): custom titles and help text for transfers and procagent forms right now the procagents form shows the title of the transfers form we need to show different texts depending on the obligation behind the form including the submission info tab
**Row 177,857** (id 6,587,917,690)

- type: IssuesEvent
- created_at: 2017-09-13 23:29:24
- repo: kubernetes/kubernetes
- repo_url: https://api.github.com/repos/kubernetes/kubernetes
- action: closed
- title: failing: [sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest logs
- labels: area/platform/gke kind/bug priority/critical-urgent sig/gcp sig/instrumentation
- index: 1.0
- label: non_process
- binary_label: 0

body:

**Is this a BUG REPORT or FEATURE REQUEST?**:
/kind bug
**What happened**:
EDIT: this probably shows it better
https://storage.googleapis.com/k8s-gubernator/triage/index.html?text=AAAAAAAAAAAAAA
I noticed this test started failing across all PR's in the pull-kubernetes-e2e-gce-bazel job this morning.
First failure I noticed:
- https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/52289/pull-kubernetes-e2e-gce-bazel/16038
Immediately followed by a pass:
- https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-e2e-gce-bazel/16039
Started continuously failing shortly afterwards:
- https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-e2e-gce-bazel/16063
This is also apparently happening for the GKE postsubmit job:
- https://k8s-testgrid.appspot.com/release-master-blocking#gke
- (clicking the changes link for the failing test leads me to https://github.com/kubernetes/kubernetes/compare/be78d113b...991afb243)

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): failing cluster level logging implemented by stackdriver should ingest logs is this a bug report or feature request kind bug what happened edit this probably shows it better i noticed this test started failing across all pr s in the pull kubernetes gce bazel job this morning first failure i noticed immediately followed by a pass started continuously failing shortly afterwards this is also apparently happening for the gke postsubmit job clicking the changes link for the failing test leads me to
**Row 429,676** (id 30,085,228,450)

- type: IssuesEvent
- created_at: 2023-06-29 08:08:23
- repo: Nalini1998/Project_Public
- repo_url: https://api.github.com/repos/Nalini1998/Project_Public
- action: closed
- title: 5. Iterate through the elements in the x and y array
- labels: documentation enhancement help wanted good first issue question
- index: 1.0
- label: non_process
- binary_label: 0

body:

Inside the for loop, iterate through the elements in the 'x' and 'y' array to set the 'x[i]' positions to be the canvas's horizontal center and the 'y[i]' positions to be the canvas's vertical center.
**Hint:**
_We can set the horizontal and vertical center of the canvas relative to width and height variables._

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): iterate through the elements in the x and y array inside the for loop iterate through the elements in the x and y array to set the x positions to be the canvas s horizontal center and the y positions to be the canvas s vertical center hint we can set the horizontal and vertical center of the canvas relative to width and height variables
**Row 1,110** (id 3,588,370,047)

- type: IssuesEvent
- created_at: 2016-01-30 23:46:33
- repo: sysown/proxysql
- repo_url: https://api.github.com/repos/sysown/proxysql
- action: closed
- title: Implement query digest to monitor performance
- labels: ADMIN GLOBAL MYSQL QUERY PROCESSOR STATISTICS
- index: 1.0
- label: process
- binary_label: 1

body:

Parent ticket for more tickets.
We need to implement a new table in which queries performances are logged

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): implement query digest to monitor performance parent ticket for more tickets we need to implement a new table in which queries performances are logged
**Row 23,274** (id 3,785,839,692)

- type: IssuesEvent
- created_at: 2016-03-20 19:26:59
- repo: LlmDl/Towny
- repo_url: https://api.github.com/repos/LlmDl/Towny
- action: closed
- title: Townychat Event Priority is having a issue with chatcontrol and deluxechat
- labels: Status-New Type-Defect
- index: 1.0
- label: non_process
- binary_label: 0

body:

Originally reported on Google Code with ID 2333
```
Im using chatcontrol And deluxechat with townychat
Deluxechat is a nice formating tool that let me add interactive chat to townychat
The developer of both plugin said that it should be a priority issue.
And chatcontrol is a filter tool to filter key words and to change it to something
else.
The developer of deluxechat said that this is how deluxechat see
"When Towny is detected and used for chat integration with DeluxeChat I do not handle
my chat listening from the AsyncPlayerChatEvent. I do everything from the AsyncChatHookEvent
which Towny Fires when a player chats:

@EventHandler(priority = EventPriority.HIGHEST)
public void onTownyChat(AsyncChatHookEvent e) {
    //do stuff
}

If ChatControl is not doing anything it is because of Towny not handling the chat event
at the correct priority then. I take the information Tonwy passes to its own fired
event and modify it from there.
(only if TownyChat integration is enabled)"
I have gone through chatcontrol and change the ChatFormatter_Event_Priority:
LOWEST
LOW
NORMAL
HIGH
HIGHEST
and none of it is working
What is the expected output? :
For the <3 to be pink in game because it was edited by chatcontrol
What do you see instead? :
https://surl.im/i/bqvlp
The <3 is still white
What is your Towny version number? :
0.89.2.27
What is your TownyChat version number? :
0.44
What is your Spigot version number? :
version git-Spigot-d2856ae-8f0f4ed (MC: 1.8.7) (Implementing API version 1.8.7-R0.1-SNAPSHOT)
What operating system? :
Linux
- Your full server startup from server.log,
https://smallpaste.in/view/55c07e7c569e7a074c8b456a
- Your towny config.yml (if using MySQL please remove pasword,)
https://smallpaste.in/view/55c07e9b569e7a064c8b456b
- Your townyperms.yml,
https://smallpaste.in/view/55c07eab569e7a094c8b456a
- Your permissions file.
Not using any permissions on test server
In the case of chat-issues please link your:
- Chatconfig.yml
https://smallpaste.in/view/55c07ebf569e7a534c8b456a
- Channels.yml
https://smallpaste.in/view/55c07ec7569e7a3f4c8b456b
```
Reported by `thu2468` on 2015-08-04 09:11:50

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text: cleaned (lowercased, de-punctuated) rendering of the title and body above; omitted here as a derived duplicate.
**Row 16,232** (id 10,682,018,046)

- type: IssuesEvent
- created_at: 2019-10-22 03:24:14
- repo: rabbitmq/rabbitmq-cli
- repo_url: https://api.github.com/repos/rabbitmq/rabbitmq-cli
- action: closed
- title: Erlang formatter not working as expected
- labels: bug usability
- index: True
- label: non_process
- binary_label: 0

body:

Reported in [this `rabbitmq-users` discussion.](https://groups.google.com/d/topic/rabbitmq-users/t1DONfY2oEg/discussion)
To reproduce:
* Start up RabbitMQ 3.8.0
* Run `rabbitmqctl cluster_status --formatter erlang`. Instead of the "old style" erlang term output, you get the "new" output stuffed into a binary:
```
Cluster status of node rabbit@shostakovich ...
<<"\e[1mBasics\e[0m\n\nCluster name: rabbit@shostakovich\n\n\e[1mDisk Nodes\e[0m\n\nrabbit@shostakovich\n\n\e[1mRunning Nodes\e[0m\n\nrabbit@shostakovich\n\n\e[1mVersions\e[0m\n\nrabbit@shostakovich: RabbitMQ 3.8.0 on Erlang 22.1.2\n\n\e[1mAlarms\e[0m\n\n(none)\n\n\e[1mNetwork Partitions\e[0m\n\n(none)\n\n\e[1mListeners\e[0m\n\nNode: rabbit@shostakovich, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication\nNode: rabbit@shostakovich, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0\nNode: rabbit@shostakovich, interface: [::], port: 15672, protocol: http, purpose: HTTP API\n\n\e[1mFeature flags\e[0m\n\nFlag: drop_unroutable_metric, state: enabled\nFlag: empty_basic_get_metric, state: enabled\nFlag: implicit_default_bindings, state: enabled\nFlag: quorum_queue, state: enabled\nFlag: virtual_host_metadata, state: enabled">>
```
Expected result: see the old cluster status like this:
```
[{nodes,[{disc,[rabbitmq@rabbitmq1,rabbitmq@rabbitmq2]}]},
 {running_nodes,[rabbitmq@rabbitmq2,rabbitmq@rabbitmq1]},
 {cluster_name,<<"rabbitmq@bfbdb8a2097d">>},
 {partitions,[]},
 {alarms,[{rabbitmq@rabbitmq2,[]},{rabbitmq@rabbitmq1,[]}]}]
```
[169232651]

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): erlang formatter not working as expected reported in to reproduce start up rabbitmq run rabbitmqctl cluster status formatter erlang instead of the old style erlang term output you get the new output stuffed into a binary cluster status of node rabbit shostakovich expected result see the old cluster status like this running nodes cluster name partitions alarms rabbitmq
**Row 344,674** (id 30,752,140,907)

- type: IssuesEvent
- created_at: 2023-07-28 20:22:31
- repo: saltstack/salt
- repo_url: https://api.github.com/repos/saltstack/salt
- action: opened
- title: [Increase Test Coverage] Batch 95
- labels: Tests
- index: 1.0
- label: non_process
- binary_label: 0

body:

Increase the code coverage percent on the following files to at least 80%.
Please be aware that currently the percentage might be inaccurate if the module uses salt due to #64696

| File | Percent |
|---|---|
| salt/states/mssql_user.py | 13 |
| salt/states/postgres_group.py | 79 |
| salt/states/rbac_solaris.py | 4 |
| salt/states/win_wua.py | 75 |
| salt/utils/slack.py | 67 |

text_combine: verbatim `title - body` concatenation; omitted here as a duplicate of the fields above.

text (cleaned): batch increase the code coverage percent on the following files to at least please be aware that currently the percentage might be inaccurate if the module uses salt due to file percent salt states mssql user py salt states postgres group py salt states rbac solaris py salt states win wua py salt utils slack py
**Row 20,315** (id 26,958,723,196)

- type: IssuesEvent
- created_at: 2023-02-08 16:35:46
- repo: googleapis/java-core
- repo_url: https://api.github.com/repos/googleapis/java-core
- action: closed
- title: Dependency Dashboard
- labels: type: process priority: p4
- index: 1.0

body:

This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Repository problems
These problems occurred while renovating this repository.
- WARN: RepoCacheS3.getCacheFolder() - appending missing trailing slash to pathname
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-core-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-core to v2.10.0](../pull/1098)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-core-grpc-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-core-grpc to v2.10.0](../pull/1099)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-core-http-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-core-http to v2.10.0](../pull/1100)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-core-parent-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-core-parent to v2.10.0](../pull/1101)
- [ ] <!-- rebase-branch=renovate/google.api.version -->[deps: update dependency com.google.api-client:google-api-client-bom to v2.2.0](../pull/1090)
- [ ] <!-- rebase-branch=renovate/google.common-protos.version -->[deps: update dependency com.google.api.grpc:proto-google-common-protos to v2.14.0](../pull/1094)
- [ ] <!-- rebase-branch=renovate/google.iam.version -->[deps: update dependency com.google.api.grpc:proto-google-iam-v1 to v1.9.0](../pull/1095)
- [ ] <!-- rebase-branch=renovate/google.api-common.version -->[deps: update dependency com.google.api:api-common to v2.6.0](../pull/1096)
- [ ] <!-- rebase-branch=renovate/gax.version -->[deps: update dependency com.google.api:gax-bom to v2.23.0](../pull/1097)
- [ ] <!-- rebase-branch=renovate/google.auth.version -->[deps: update dependency com.google.auth:google-auth-library-bom to v1.15.0](../pull/1092)
- [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once**
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/actions-checkout-3.x -->[deps: update actions/checkout action to v3](../pull/746)
## Detected dependencies
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/approve-readme.yaml</summary>
- `actions/github-script v6`
</details>
<details><summary>.github/workflows/auto-release.yaml</summary>
- `actions/github-script v6`
</details>
<details><summary>.github/workflows/ci.yaml</summary>
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
</details>
<details><summary>.github/workflows/downstream.yaml</summary>
- `actions/checkout v2`
- `actions/setup-java v3`
</details>
</blockquote>
</details>
<details><summary>maven</summary>
<blockquote>
<details><summary>google-cloud-core-bom/pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.cloud:google-cloud-core 2.9.5-SNAPSHOT`
- `com.google.cloud:google-cloud-core-grpc 2.9.5-SNAPSHOT`
- `com.google.cloud:google-cloud-core-http 2.9.5-SNAPSHOT`
</details>
<details><summary>google-cloud-core-grpc/pom.xml</summary>
- `com.google.cloud:google-cloud-core-parent 2.9.5-SNAPSHOT`
</details>
<details><summary>google-cloud-core-http/pom.xml</summary>
- `com.google.cloud:google-cloud-core-parent 2.9.5-SNAPSHOT`
</details>
<details><summary>google-cloud-core/pom.xml</summary>
- `com.google.cloud:google-cloud-core-parent 2.9.5-SNAPSHOT`
</details>
<details><summary>pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.auth:google-auth-library-bom 1.14.0`
- `com.google.api:gax-bom 2.22.0`
- `com.google.http-client:google-http-client-bom 1.42.3`
- `com.google.api-client:google-api-client-bom 2.1.2`
- `io.grpc:grpc-bom 1.52.1`
- `com.google.protobuf:protobuf-bom 3.21.12`
- `com.google.guava:guava-bom 31.1-jre`
- `com.google.api:api-common 2.5.0`
- `com.google.api.grpc:proto-google-common-protos 2.13.0`
- `com.google.api.grpc:proto-google-iam-v1 1.8.0`
- `io.opencensus:opencensus-api 0.31.1`
- `io.opencensus:opencensus-contrib-http-util 0.31.1`
- `javax.annotation:javax.annotation-api 1.3.2`
- `com.google.code.findbugs:jsr305 3.0.2`
- `org.threeten:threetenbp 1.6.5`
- `com.google.errorprone:error_prone_annotations 2.18.0`
- `com.google.code.gson:gson 2.10.1`
- `com.google.truth:truth 1.1.3`
- `junit:junit 4.13.2`
- `org.easymock:easymock 5.1.0`
- `org.objenesis:objenesis 3.3`
- `org.apache.maven.plugins:maven-project-info-reports-plugin 3.4.2`
- `org.apache.maven.plugins:maven-javadoc-plugin 3.4.1`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository

text_combine: verbatim `title - body` concatenation; the dump is truncated partway through this field, so the remaining columns of this row (label, text, binary_label) are not available here.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-core-grpc-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-core-grpc to v2.10.0](../pull/1099)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-core-http-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-core-http to v2.10.0](../pull/1100)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-core-parent-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-core-parent to v2.10.0](../pull/1101)
- [ ] <!-- rebase-branch=renovate/google.api.version -->[deps: update dependency com.google.api-client:google-api-client-bom to v2.2.0](../pull/1090)
- [ ] <!-- rebase-branch=renovate/google.common-protos.version -->[deps: update dependency com.google.api.grpc:proto-google-common-protos to v2.14.0](../pull/1094)
- [ ] <!-- rebase-branch=renovate/google.iam.version -->[deps: update dependency com.google.api.grpc:proto-google-iam-v1 to v1.9.0](../pull/1095)
- [ ] <!-- rebase-branch=renovate/google.api-common.version -->[deps: update dependency com.google.api:api-common to v2.6.0](../pull/1096)
- [ ] <!-- rebase-branch=renovate/gax.version -->[deps: update dependency com.google.api:gax-bom to v2.23.0](../pull/1097)
- [ ] <!-- rebase-branch=renovate/google.auth.version -->[deps: update dependency com.google.auth:google-auth-library-bom to v1.15.0](../pull/1092)
- [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once**
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/actions-checkout-3.x -->[deps: update actions/checkout action to v3](../pull/746)
## Detected dependencies
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/approve-readme.yaml</summary>
- `actions/github-script v6`
</details>
<details><summary>.github/workflows/auto-release.yaml</summary>
- `actions/github-script v6`
</details>
<details><summary>.github/workflows/ci.yaml</summary>
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
- `actions/checkout v3`
- `actions/setup-java v3`
</details>
<details><summary>.github/workflows/downstream.yaml</summary>
- `actions/checkout v2`
- `actions/setup-java v3`
</details>
</blockquote>
</details>
<details><summary>maven</summary>
<blockquote>
<details><summary>google-cloud-core-bom/pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.cloud:google-cloud-core 2.9.5-SNAPSHOT`
- `com.google.cloud:google-cloud-core-grpc 2.9.5-SNAPSHOT`
- `com.google.cloud:google-cloud-core-http 2.9.5-SNAPSHOT`
</details>
<details><summary>google-cloud-core-grpc/pom.xml</summary>
- `com.google.cloud:google-cloud-core-parent 2.9.5-SNAPSHOT`
</details>
<details><summary>google-cloud-core-http/pom.xml</summary>
- `com.google.cloud:google-cloud-core-parent 2.9.5-SNAPSHOT`
</details>
<details><summary>google-cloud-core/pom.xml</summary>
- `com.google.cloud:google-cloud-core-parent 2.9.5-SNAPSHOT`
</details>
<details><summary>pom.xml</summary>
- `com.google.cloud:google-cloud-shared-config 1.5.5`
- `com.google.auth:google-auth-library-bom 1.14.0`
- `com.google.api:gax-bom 2.22.0`
- `com.google.http-client:google-http-client-bom 1.42.3`
- `com.google.api-client:google-api-client-bom 2.1.2`
- `io.grpc:grpc-bom 1.52.1`
- `com.google.protobuf:protobuf-bom 3.21.12`
- `com.google.guava:guava-bom 31.1-jre`
- `com.google.api:api-common 2.5.0`
- `com.google.api.grpc:proto-google-common-protos 2.13.0`
- `com.google.api.grpc:proto-google-iam-v1 1.8.0`
- `io.opencensus:opencensus-api 0.31.1`
- `io.opencensus:opencensus-contrib-http-util 0.31.1`
- `javax.annotation:javax.annotation-api 1.3.2`
- `com.google.code.findbugs:jsr305 3.0.2`
- `org.threeten:threetenbp 1.6.5`
- `com.google.errorprone:error_prone_annotations 2.18.0`
- `com.google.code.gson:gson 2.10.1`
- `com.google.truth:truth 1.1.3`
- `junit:junit 4.13.2`
- `org.easymock:easymock 5.1.0`
- `org.objenesis:objenesis 3.3`
- `org.apache.maven.plugins:maven-project-info-reports-plugin 3.4.2`
- `org.apache.maven.plugins:maven-javadoc-plugin 3.4.1`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more repository problems these problems occurred while renovating this repository warn getcachefolder appending missing trailing slash to pathname open these updates have all been created already click a checkbox below to force a retry rebase of any pull pull pull pull pull pull pull pull pull pull click on this checkbox to rebase all open prs at once ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull detected dependencies github actions github workflows approve readme yaml actions github script github workflows auto release yaml actions github script github workflows ci yaml actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java github workflows downstream yaml actions checkout actions setup java maven google cloud core bom pom xml com google cloud google cloud shared config com google cloud google cloud core snapshot com google cloud google cloud core grpc snapshot com google cloud google cloud core http snapshot google cloud core grpc pom xml com google cloud google cloud core parent snapshot google cloud core http pom xml com google cloud google cloud core parent snapshot google cloud core pom xml com google cloud google cloud core parent snapshot pom xml com google cloud google cloud shared config com google auth google auth library bom com google api gax bom com google http client google http client bom com google api client google api client bom io grpc grpc bom com google protobuf protobuf bom com google guava guava bom jre com google api api common com google api grpc proto google common protos com google api grpc proto google iam io opencensus opencensus api io opencensus opencensus contrib http util javax annotation javax annotation api com google code findbugs org threeten threetenbp com google errorprone error prone annotations com google code gson gson com google truth truth junit junit org easymock easymock org objenesis objenesis org apache maven plugins maven project info reports plugin org apache maven plugins maven javadoc plugin check this box to trigger a request for renovate to run again on this repository
| 1
|
638,264
| 20,720,241,090
|
IssuesEvent
|
2022-03-13 09:17:38
|
AY2122S2-CS2103T-T09-2/tp
|
https://api.github.com/repos/AY2122S2-CS2103T-T09-2/tp
|
opened
|
As a Recruiter, I can see what stage of the job application an applicant is at
|
type.Story priority.High
|
so that, I can keep track of their progress
|
1.0
|
As a Recruiter, I can see what stage of the job application an applicant is at - so that, I can keep track of their progress
|
non_process
|
as a recruiter i can see what stage of the job application an applicant is at so that i can keep track of their progress
| 0
|
741,261
| 25,786,300,926
|
IssuesEvent
|
2022-12-09 20:51:40
|
low-key-code/poorcast
|
https://api.github.com/repos/low-key-code/poorcast
|
closed
|
Design Bills Page
|
Frontend MEDIUM priority
|
### Problem:
Bills page needs design.
### Solution:
See issue #43 and design accordingly through dribble and mocks in Miro.
### Success Criteria:
- [x] Bills page is designed
|
1.0
|
Design Bills Page - ### Problem:
Bills page needs design.
### Solution:
See issue #43 and design accordingly through dribble and mocks in Miro.
### Success Criteria:
- [x] Bills page is designed
|
non_process
|
design bills page problem bills page needs design solution see issue and design accordingly through dribble and mocks in miro success criteria bills page is designed
| 0
|
3,047
| 4,141,011,830
|
IssuesEvent
|
2016-06-14 01:59:21
|
NixOS/nixpkgs
|
https://api.github.com/repos/NixOS/nixpkgs
|
closed
|
Grsecurity does not work in virtualbox
|
6.topic: grsecurity
|
So, i wanted to test grsecurity in virtualbox, but it does not seem to boot. This is my config
```
security.grsecurity = {
enable = true;
stable = true;
config = {
system = "server";
hardwareVirtualisation = false;
virtualisationSoftware = "virtualbox";
};
};
```
Kernel crashes at boot with some error indicating that execution of init is not possible for some reason, but i did not investigate this problem future.
@thoughtpolice do you have any idea what could be the problem?
|
True
|
Grsecurity does not work in virtualbox - So, i wanted to test grsecurity in virtualbox, but it does not seem to boot. This is my config
```
security.grsecurity = {
enable = true;
stable = true;
config = {
system = "server";
hardwareVirtualisation = false;
virtualisationSoftware = "virtualbox";
};
};
```
Kernel crashes at boot with some error indicating that execution of init is not possible for some reason, but i did not investigate this problem future.
@thoughtpolice do you have any idea what could be the problem?
|
non_process
|
grsecurity does not work in virtualbox so i wanted to test grsecurity in virtualbox but it does not seem to boot this is my config security grsecurity enable true stable true config system server hardwarevirtualisation false virtualisationsoftware virtualbox kernel crashes at boot with some error indicating that execution of init is not possible for some reason but i did not investigate this problem future thoughtpolice do you have any idea what could be the problem
| 0
|
171,175
| 27,076,917,065
|
IssuesEvent
|
2023-02-14 11:15:21
|
ZoCom-utbildning/folkans-it-utbildningar
|
https://api.github.com/repos/ZoCom-utbildning/folkans-it-utbildningar
|
closed
|
Questionaire styling
|
Design Programmering V: Formulär
|
- [x] mobilvy - card/image component - prototype
- [x] desktopvy - card/image component - prototype
- [ ] mobilvy - card/image component
- [ ] desktopvy - card/image component
|
1.0
|
Questionaire styling - - [x] mobilvy - card/image component - prototype
- [x] desktopvy - card/image component - prototype
- [ ] mobilvy - card/image component
- [ ] desktopvy - card/image component
|
non_process
|
questionaire styling mobilvy card image component prototype desktopvy card image component prototype mobilvy card image component desktopvy card image component
| 0
|
76,495
| 15,496,123,921
|
IssuesEvent
|
2021-03-11 02:06:12
|
poojasoude/SmartHotel
|
https://api.github.com/repos/poojasoude/SmartHotel
|
opened
|
CVE-2021-23337 (High) detected in lodash-4.17.10.tgz, lodash-3.10.1.tgz
|
security vulnerability
|
## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-4.17.10.tgz</b>, <b>lodash-3.10.1.tgz</b></p></summary>
<p>
<details><summary><b>lodash-4.17.10.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz</a></p>
<p>Path to dependency file: SmartHotel/SmartHotel360.Website/ClientApp/package.json</p>
<p>Path to vulnerable library: SmartHotel/SmartHotel360.Website/ClientApp/node_modules/cypress/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- cypress-3.1.1.tgz (Root Library)
- :x: **lodash-4.17.10.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: SmartHotel/SmartHotel360.Website/ClientApp/package.json</p>
<p>Path to vulnerable library: SmartHotel/SmartHotel360.Website/ClientApp/node_modules/release-it/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- releaser-0.2.2.tgz (Root Library)
- release-it-2.3.1.tgz
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Command Injection via template.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23337 (High) detected in lodash-4.17.10.tgz, lodash-3.10.1.tgz - ## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-4.17.10.tgz</b>, <b>lodash-3.10.1.tgz</b></p></summary>
<p>
<details><summary><b>lodash-4.17.10.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.10.tgz</a></p>
<p>Path to dependency file: SmartHotel/SmartHotel360.Website/ClientApp/package.json</p>
<p>Path to vulnerable library: SmartHotel/SmartHotel360.Website/ClientApp/node_modules/cypress/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- cypress-3.1.1.tgz (Root Library)
- :x: **lodash-4.17.10.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: SmartHotel/SmartHotel360.Website/ClientApp/package.json</p>
<p>Path to vulnerable library: SmartHotel/SmartHotel360.Website/ClientApp/node_modules/release-it/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- releaser-0.2.2.tgz (Root Library)
- release-it-2.3.1.tgz
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Command Injection via template.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in lodash tgz lodash tgz cve high severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash modular utilities library home page a href path to dependency file smarthotel website clientapp package json path to vulnerable library smarthotel website clientapp node modules cypress node modules lodash package json dependency hierarchy cypress tgz root library x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file smarthotel website clientapp package json path to vulnerable library smarthotel website clientapp node modules release it node modules lodash package json dependency hierarchy releaser tgz root library release it tgz x lodash tgz vulnerable library vulnerability details all versions of package lodash all versions of package org fujion webjars lodash are vulnerable to command injection via template publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
|
5,609
| 8,468,914,353
|
IssuesEvent
|
2018-10-23 21:07:14
|
carloseduardov8/Viajato
|
https://api.github.com/repos/carloseduardov8/Viajato
|
closed
|
Criar banco de dados de locação de veículos
|
Priority:Normal Process: Setup Environment
|
Definir e popular base com empresas de locação e tipos de automóveis disponíveis.
|
1.0
|
Criar banco de dados de locação de veículos - Definir e popular base com empresas de locação e tipos de automóveis disponíveis.
|
process
|
criar banco de dados de locação de veículos definir e popular base com empresas de locação e tipos de automóveis disponíveis
| 1
|
20,583
| 27,244,872,563
|
IssuesEvent
|
2023-02-22 00:36:50
|
emacs-ess/ESS
|
https://api.github.com/repos/emacs-ess/ESS
|
closed
|
Default R session for executing code blocks
|
process:eval
|
Hi
In org-mode and with multiple R sessions if I have an R code and execute code with C-cC-c it will use the default R session for that buffer, or section etc.
However, if I go into ess-mode (C-c') and I try to send a block to the R session either with C-cC-c or C-cC-j or any other command it will ask for which R session to use. however the default R session appears to be randomly selected. Could we make the default R session to be the same default R session for that org-mode section?
Thanks
FKG
|
1.0
|
Default R session for executing code blocks - Hi
In org-mode and with multiple R sessions if I have an R code and execute code with C-cC-c it will use the default R session for that buffer, or section etc.
However, if I go into ess-mode (C-c') and I try to send a block to the R session either with C-cC-c or C-cC-j or any other command it will ask for which R session to use. however the default R session appears to be randomly selected. Could we make the default R session to be the same default R session for that org-mode section?
Thanks
FKG
|
process
|
default r session for executing code blocks hi in org mode and with multiple r sessions if i have an r code and execute code with c cc c it will use the default r session for that buffer or section etc however if i go into ess mode c c and i try to send a block to the r session either with c cc c or c cc j or any other command it will ask for which r session to use however the default r session appears to be randomly selected could we make the default r session to be the same default r session for that org mode section thanks fkg
| 1
|
73,066
| 24,438,182,204
|
IssuesEvent
|
2022-10-06 12:58:47
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
User can get stuck with sending read receipts disabled if server stopped supporting it
|
T-Defect A-User-Settings A-Read-Receipts O-Uncommon
|
- Use a server that supports MSC2285
- Disable sending read receipts
- The server stops supporting MSC2285
- You're stuck with no RRs and no way to clear notifs
It's an edge-case but we should at least provide a way to enable them again
|
1.0
|
User can get stuck with sending read receipts disabled if server stopped supporting it - - Use a server that supports MSC2285
- Disable sending read receipts
- The server stops supporting MSC2285
- You're stuck with no RRs and no way to clear notifs
It's an edge-case but we should at least provide a way to enable them again
|
non_process
|
user can get stuck with sending read receipts disabled if server stopped supporting it use a server that supports disable sending read receipts the server stops supporting you re stuck with no rrs and no way to clear notifs it s an edge case but we should at least provide a way to enable them again
| 0
|
13,925
| 16,681,803,881
|
IssuesEvent
|
2021-06-08 01:23:56
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Warning before running analyses on too large rasters
|
Feature Request Feedback Processing
|
Author Name: **Paolo Cavallini** (@pcav)
Original Redmine Issue: [8249](https://issues.qgis.org/issues/8249)
Redmine category:processing/core
---
If the user does not select an appropriate resolution for the resulting raster(s), it risks to get stuck with an excessive resolution, and too long processing time. I suggest to do a simple calculation of the resulting number of pixel before running analyses, and warn the user if this is too high.
---
Related issue(s): #16342 (relates)
Redmine related issue(s): [7369](https://issues.qgis.org/issues/7369)
---
|
1.0
|
Warning before running analyses on too large rasters - Author Name: **Paolo Cavallini** (@pcav)
Original Redmine Issue: [8249](https://issues.qgis.org/issues/8249)
Redmine category:processing/core
---
If the user does not select an appropriate resolution for the resulting raster(s), it risks to get stuck with an excessive resolution, and too long processing time. I suggest to do a simple calculation of the resulting number of pixel before running analyses, and warn the user if this is too high.
---
Related issue(s): #16342 (relates)
Redmine related issue(s): [7369](https://issues.qgis.org/issues/7369)
---
|
process
|
warning before running analyses on too large rasters author name paolo cavallini pcav original redmine issue redmine category processing core if the user does not select an appropriate resolution for the resulting raster s it risks to get stuck with an excessive resolution and too long processing time i suggest to do a simple calculation of the resulting number of pixel before running analyses and warn the user if this is too high related issue s relates redmine related issue s
| 1
|
6,297
| 9,303,207,790
|
IssuesEvent
|
2019-03-24 15:49:05
|
shirou/gopsutil
|
https://api.github.com/repos/shirou/gopsutil
|
closed
|
process.PidExists(0) return False on Linux
|
os:linux package:process
|
With current version of `gopsutil`:
```
$ cat test.go
cat test.go
package main
import (
"fmt"
"github.com/shirou/gopsutil/process"
)
func main() {
fmt.Println(process.PidExists(0))
}
$ go run test.go
false <nil>
```
On any POSIX system, PID 0 means all process and that check should return True. At least with FreeBSD 9.3, `process.PidExists(0)` will report `true`.
python `psutil` also has the same [issue](https://github.com/giampaolo/psutil/issues/844).
I have made a [quick fix](https://github.com/Gnouc/gopsutil/commit/0c5c7ec45623efc659a854744070acbc2f7349c6) for this issue, though I'm not sure it will work on Windows.
|
1.0
|
process.PidExists(0) return False on Linux - With current version of `gopsutil`:
```
$ cat test.go
cat test.go
package main
import (
"fmt"
"github.com/shirou/gopsutil/process"
)
func main() {
fmt.Println(process.PidExists(0))
}
$ go run test.go
false <nil>
```
On any POSIX system, PID 0 means all process and that check should return True. At least with FreeBSD 9.3, `process.PidExists(0)` will report `true`.
python `psutil` also has the same [issue](https://github.com/giampaolo/psutil/issues/844).
I have made a [quick fix](https://github.com/Gnouc/gopsutil/commit/0c5c7ec45623efc659a854744070acbc2f7349c6) for this issue, though I'm not sure it will work on Windows.
|
process
|
process pidexists return false on linux with current version of gopsutil cat test go cat test go package main import fmt github com shirou gopsutil process func main fmt println process pidexists go run test go false on any posix system pid means all process and that check should return true at least with freebsd process pidexists will report true python psutil also has the same i have made a for this issue though i m not sure it will work on windows
| 1
|
18,157
| 3,029,736,953
|
IssuesEvent
|
2015-08-04 14:11:29
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
RandomMaterial test is not testing anything
|
C: MOOSE P: normal T: defect
|
There is a type in the input file, causing the material property not to be output. I'll fix this.
|
1.0
|
RandomMaterial test is not testing anything - There is a type in the input file, causing the material property not to be output. I'll fix this.
|
non_process
|
randommaterial test is not testing anything there is a type in the input file causing the material property not to be output i ll fix this
| 0
|
6,953
| 10,113,921,971
|
IssuesEvent
|
2019-07-30 17:55:09
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
[BottomSheet] Update checklist for BottomSheet
|
[BottomSheet] type:Process
|
OWNERs should update checklist to fill in missing values.
https://docs.google.com/spreadsheets/d/1H30t5r8rJcXkyG9tNpPSOgVEJ0enxaZlazQJrlfyYSo/edit#gid=0
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/117177634](http://b/117177634)
|
1.0
|
[BottomSheet] Update checklist for BottomSheet - OWNERs should update checklist to fill in missing values.
https://docs.google.com/spreadsheets/d/1H30t5r8rJcXkyG9tNpPSOgVEJ0enxaZlazQJrlfyYSo/edit#gid=0
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/117177634](http://b/117177634)
|
process
|
update checklist for bottomsheet owners should update checklist to fill in missing values internal data associated internal bug
| 1
|
23,590
| 7,346,463,710
|
IssuesEvent
|
2018-03-07 20:48:50
|
BeAPI/bea-media-analytics
|
https://api.github.com/repos/BeAPI/bea-media-analytics
|
opened
|
Envira Gallery support
|
Builder Support Enhancement
|
<!--
Thanks for contributing !
Please note:
- These comments won't show up when you submit your issue.
- Please choose a descriptive title, ex. : "On media delete, it's still indexed".
- Try to provide as many details as possible on the below list.
- If requesting a new feature, please explain why you'd like to see it added.
-->
#### Version information
* Plugin version :
* PHP :
* WordPress :
#### Steps to reproduce
1.
2.
3.
#### What was expected
#### What happened instead
|
1.0
|
Envira Gallery support - <!--
Thanks for contributing !
Please note:
- These comments won't show up when you submit your issue.
- Please choose a descriptive title, ex. : "On media delete, it's still indexed".
- Try to provide as many details as possible on the below list.
- If requesting a new feature, please explain why you'd like to see it added.
-->
#### Version information
* Plugin version :
* PHP :
* WordPress :
#### Steps to reproduce
1.
2.
3.
#### What was expected
#### What happened instead
|
non_process
|
envira gallery support thanks for contributing please note these comments won t show up when you submit your issue please choose a descriptive title ex on media delete it s still indexed try to provide as many details as possible on the below list if requesting a new feature please explain why you d like to see it added version information plugin version php wordpress steps to reproduce what was expected what happened instead
| 0
|
70,348
| 7,183,848,937
|
IssuesEvent
|
2018-02-01 14:43:15
|
openshift/origin
|
https://api.github.com/repos/openshift/origin
|
reopened
|
executing 'oc get secrets -o name --all-namespaces; oc describe appliedclusterresourcequota/for-deads-by-annotation -n quota-bar --as deads' expecting any result and text 'secrets.
|
kind/test-flake priority/P1 sig/master
|
https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/17599/test_pull_request_origin_cmd/6974/
@deads2k you've got your name on it, you own it
/assign deads2k
/kind test-flake
/sig master
|
1.0
|
executing 'oc get secrets -o name --all-namespaces; oc describe appliedclusterresourcequota/for-deads-by-annotation -n quota-bar --as deads' expecting any result and text 'secrets. - https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/17599/test_pull_request_origin_cmd/6974/
@deads2k you've got your name on it, you own it
/assign deads2k
/kind test-flake
/sig master
|
non_process
|
executing oc get secrets o name all namespaces oc describe appliedclusterresourcequota for deads by annotation n quota bar as deads expecting any result and text secrets you ve got your name on it you own it assign kind test flake sig master
| 0
|
436
| 2,868,705,560
|
IssuesEvent
|
2015-06-05 20:31:50
|
gremau/NMEG_fluxproc_testing
|
https://api.github.com/repos/gremau/NMEG_fluxproc_testing
|
closed
|
Put VPD in fluxall_qc files
|
enhancement Gap Filling QC Process
|
There is no vpd value in the `fluxall_qc` files. Currently we let the MPI gapfiller calculate this for us. We should calculate it ourselves and put it in these files.
|
1.0
|
Put VPD in fluxall_qc files - There is no vpd value in the `fluxall_qc` files. Currently we let the MPI gapfiller calculate this for us. We should calculate it ourselves and put it in these files.
|
process
|
put vpd in fluxall qc files there is no vpd value in the fluxall qc files currently we let the mpi gapfiller calculate this for us we should calculate it ourselves and put it in these files
| 1
|
18,785
| 24,690,505,448
|
IssuesEvent
|
2022-10-19 08:11:44
|
deepset-ai/haystack
|
https://api.github.com/repos/deepset-ai/haystack
|
closed
|
Bug: Load PreProcessor with `split_by: None` from config fails
|
type:bug topic:preprocessing
|
### The issue
When trying to load the PreProcessor node with `split_by: None` from config fails.
### Reproduce the issue
To reproduce the issue, execute this script with the latest Haystack version and python 3.8:
```python
from haystack import Pipeline
from haystack.nodes.preprocessor import PreProcessor
processor = PreProcessor(
split_by=None,
)
pipeline = Pipeline()
pipeline.add_node(processor, name="Preprocessor", inputs=["File"])
config = pipeline.get_config()
pipeline = Pipeline.load_from_config(config, pipeline_name="indexing")
```
### The error
Where it fails:
https://github.com/deepset-ai/haystack/blob/7290196c32abf44e269caa03e1d2178c77992b0e/haystack/pipelines/config.py#L307-L331
The actual error:
```shell
On instance['components'][0]:
{'name': 'Preprocessor',
'params': {'split_by': None},
'type': 'PreProcessor'}
Traceback (most recent call last):
File "/home/paperspace/repositories/smart-search/env/lib/python3.8/site-packages/haystack/pipelines/config.py", line 338, in validate_schema
Draft7Validator(schema).validate(instance=pipeline_config)
File "/home/paperspace/repositories/smart-search/env/lib/python3.8/site-packages/jsonschema/validators.py", line 304, in validate
raise error
jsonschema.exceptions.ValidationError: {'name': 'Preprocessor', 'type': 'PreProcessor', 'params': {'split_by': None}} is not valid under any of the given schemas
On instance['components'][0]:
{'name': 'Preprocessor',
'params': {'split_by': None},
'type': 'PreProcessor'}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "haystack_bug_test.py", line 12, in <module>
pipeline = Pipeline.load_from_config(config, pipeline_name="indexing")
File "/home/paperspace/repositories/smart-search/env/lib/python3.8/site-packages/haystack/pipelines/base.py", line 1913, in load_from_config
validate_config(pipeline_config, strict_version_check=strict_version_check)
File "/home/paperspace/repositories/smart-search/env/lib/python3.8/site-packages/haystack/pipelines/config.py", line 259, in validate_config
validate_schema(
File "/home/paperspace/repositories/smart-search/env/lib/python3.8/site-packages/haystack/pipelines/config.py", line 366, in validate_schema
raise PipelineSchemaError(
haystack.errors.PipelineSchemaError: Node of type PreProcessor found, but it failed validation. Possible causes:
- The node is missing some mandatory parameter
- Wrong indentation of some parameter in YAML
See the stacktrace for more information.
```
|
1.0
|
Bug: Load PreProcessor with `split_by: None` from config fails - ### The issue
When trying to load the PreProcessor node with `split_by: None` from config fails.
### Reproduce the issue
To reproduce the issue, execute this script with the latest Haystack version and python 3.8:
```python
from haystack import Pipeline
from haystack.nodes.preprocessor import PreProcessor
processor = PreProcessor(
split_by=None,
)
pipeline = Pipeline()
pipeline.add_node(processor, name="Preprocessor", inputs=["File"])
config = pipeline.get_config()
pipeline = Pipeline.load_from_config(config, pipeline_name="indexing")
```
### The error
Where it fails:
https://github.com/deepset-ai/haystack/blob/7290196c32abf44e269caa03e1d2178c77992b0e/haystack/pipelines/config.py#L307-L331
The actual error:
```shell
On instance['components'][0]:
{'name': 'Preprocessor',
'params': {'split_by': None},
'type': 'PreProcessor'}
Traceback (most recent call last):
File "/home/paperspace/repositories/smart-search/env/lib/python3.8/site-packages/haystack/pipelines/config.py", line 338, in validate_schema
Draft7Validator(schema).validate(instance=pipeline_config)
File "/home/paperspace/repositories/smart-search/env/lib/python3.8/site-packages/jsonschema/validators.py", line 304, in validate
raise error
jsonschema.exceptions.ValidationError: {'name': 'Preprocessor', 'type': 'PreProcessor', 'params': {'split_by': None}} is not valid under any of the given schemas
On instance['components'][0]:
{'name': 'Preprocessor',
'params': {'split_by': None},
'type': 'PreProcessor'}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "haystack_bug_test.py", line 12, in <module>
pipeline = Pipeline.load_from_config(config, pipeline_name="indexing")
File "/home/paperspace/repositories/smart-search/env/lib/python3.8/site-packages/haystack/pipelines/base.py", line 1913, in load_from_config
validate_config(pipeline_config, strict_version_check=strict_version_check)
File "/home/paperspace/repositories/smart-search/env/lib/python3.8/site-packages/haystack/pipelines/config.py", line 259, in validate_config
validate_schema(
File "/home/paperspace/repositories/smart-search/env/lib/python3.8/site-packages/haystack/pipelines/config.py", line 366, in validate_schema
raise PipelineSchemaError(
haystack.errors.PipelineSchemaError: Node of type PreProcessor found, but it failed validation. Possible causes:
- The node is missing some mandatory parameter
- Wrong indentation of some parameter in YAML
See the stacktrace for more information.
```
|
process
|
bug load preprocessor with split by none from config fails the issue when trying to load the preprocessor node with split by none from config fails reproduce the issue to reproduce the issue execute this script with the latest haystack version and python python from haystack import pipeline from haystack nodes preprocessor import preprocessor processor preprocessor split by none pipeline pipeline pipeline add node processor name preprocessor inputs config pipeline get config pipeline pipeline load from config config pipeline name indexing the error where it fails the actual error shell on instance name preprocessor params split by none type preprocessor traceback most recent call last file home paperspace repositories smart search env lib site packages haystack pipelines config py line in validate schema schema validate instance pipeline config file home paperspace repositories smart search env lib site packages jsonschema validators py line in validate raise error jsonschema exceptions validationerror name preprocessor type preprocessor params split by none is not valid under any of the given schemas on instance name preprocessor params split by none type preprocessor the above exception was the direct cause of the following exception traceback most recent call last file haystack bug test py line in pipeline pipeline load from config config pipeline name indexing file home paperspace repositories smart search env lib site packages haystack pipelines base py line in load from config validate config pipeline config strict version check strict version check file home paperspace repositories smart search env lib site packages haystack pipelines config py line in validate config validate schema file home paperspace repositories smart search env lib site packages haystack pipelines config py line in validate schema raise pipelineschemaerror haystack errors pipelineschemaerror node of type preprocessor found but it failed validation possible causes the node is missing some mandatory parameter wrong indentation of some parameter in yaml see the stacktrace for more information
| 1
|
5,843
| 8,670,721,096
|
IssuesEvent
|
2018-11-29 17:09:50
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
[Firestore] FirestoreClient.list_collections() is missing, but is in DocumentReference.
|
api: firestore triaged for GA type: process
|
FirestoreClient.list_collections() is missing, but it does exist as part of DocumentReference.
|
1.0
|
[Firestore] FirestoreClient.list_collections() is missing, but is in DocumentReference. - FirestoreClient.list_collections() is missing, but it does exist as part of DocumentReference.
|
process
|
firestoreclient list collections is missing but is in documentreference firestoreclient list collections is missing but it does exist as part of documentreference
| 1
|
15,796
| 19,986,267,425
|
IssuesEvent
|
2022-01-30 18:03:25
|
processing/processing4
|
https://api.github.com/repos/processing/processing4
|
closed
|
Problem with function size(int arg, int arg) in Class
|
help wanted preprocessor
|
When the function `void size(int arg, int arg)` is used with two arguments in `Class`, that causes a syntax problem when this function is called directly in `void setup()`
Processing 4.0.2b / OS Monterey
Code to reproduce:
```
Truc truc = new Truc();
void setup() {
size(200,200);
truc.size(1,1); // problem >>> error at "."
// func();
}
void draw() {
truc.size(1,1); // no problem
}
void func() {
truc.size(1,1); // no problem
}
class Truc {
void size(int x, int y) {
}
}
```
|
1.0
|
Problem with function size(int arg, int arg) in Class - When the function `void size(int arg, int arg)` is used with two arguments in `Class`, that causes a syntax problem when this function is called directly in `void setup()`
Processing 4.0.2b / OS Monterey
Code to reproduce:
```
Truc truc = new Truc();
void setup() {
size(200,200);
truc.size(1,1); // problem >>> error at "."
// func();
}
void draw() {
truc.size(1,1); // no problem
}
void func() {
truc.size(1,1); // no problem
}
class Truc {
void size(int x, int y) {
}
}
```
|
process
|
problem with function size int arg int arg in class when the function void size int arg int arg is used with two arguments in class that causes a syntax problem when this function is called directly in void setup processing os monterey code to reproduce truc truc new truc void setup size truc size problem error at func void draw truc size no problem void func truc size no problem class truc void size int x int y
| 1
|
20,908
| 27,749,863,053
|
IssuesEvent
|
2023-03-15 19:50:00
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
opened
|
[processor/servicegraph] Deprecate component
|
processor/servicegraph connector/servicegraph
|
### Component(s)
connector/servicegraph, processor/servicegraph
### Describe the issue you're reporting
This is a tracking issue relating to https://github.com/open-telemetry/opentelemetry-collector/issues/7370
The processor has been reimplemented as a connector. (See https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/18389)
We should deprecate the processor when the code owners believe the time is appropriate.
|
1.0
|
[processor/servicegraph] Deprecate component - ### Component(s)
connector/servicegraph, processor/servicegraph
### Describe the issue you're reporting
This is a tracking issue relating to https://github.com/open-telemetry/opentelemetry-collector/issues/7370
The processor has been reimplemented as a connector. (See https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/18389)
We should deprecate the processor when the code owners believe the time is appropriate.
|
process
|
deprecate component component s connector servicegraph processor servicegraph describe the issue you re reporting this is a tracking issue relating to the processor has been reimplemented as a connector see we should deprecate the processor when the code owners believe the time is appropriate
| 1
|
18,947
| 24,908,256,210
|
IssuesEvent
|
2022-10-29 14:39:42
|
calaldees/KaraKara
|
https://api.github.com/repos/calaldees/KaraKara
|
closed
|
Parallel encoding
|
feature processmedia2
|
I have a 64-core server available; ffmpeg has multi-threading but it never seems to use more than a few at once, so to use all the processing power we would need multiple ffmpeg instances. I did a hacky thing of setting encode order to random, commenting out the lock file, and then running multiple processmedia2 docker instances, but that seemed messy as they would occasionally overwrite each other's work.
It would be nice if processmedia2 itself could encode multiple tracks in parallel. I tried updating `encode_media.py` `def encode_media()` to use a `ThreadPool` with `map()`, but somehow that broke in a lot of different ways and I don't even understand what happened...
|
1.0
|
Parallel encoding - I have a 64-core server available; ffmpeg has multi-threading but it never seems to use more than a few at once, so to use all the processing power we would need multiple ffmpeg instances. I did a hacky thing of setting encode order to random, commenting out the lock file, and then running multiple processmedia2 docker instances, but that seemed messy as they would occasionally overwrite each other's work.
It would be nice if processmedia2 itself could encode multiple tracks in parallel. I tried updating `encode_media.py` `def encode_media()` to use a `ThreadPool` with `map()`, but somehow that broke in a lot of different ways and I don't even understand what happened...
|
process
|
parallel encoding i have a core server available ffmpeg has multi threading but it never seems to use more than a few at once so to use all the processing power we would need multiple ffmpeg instances i did a hacky thing of setting encode order to random commenting out the lock file and then running multiple docker instances but that seemed messy as they would occasionally overwrite each other s work it would be nice if itself could encode multiple tracks in parallel i tried updating encode media py def encode media to use a threadpool with map but somehow that broke in a lot of different ways and i don t even understand what happened
| 1
|
228,336
| 18,170,921,835
|
IssuesEvent
|
2021-09-27 19:56:30
|
USAID-OHA-SI/That-is-MSDup
|
https://api.github.com/repos/USAID-OHA-SI/That-is-MSDup
|
closed
|
HTS POS FY19 targets incorrect - age/sex/result plus keypop/result
|
OU - Cameroon Tech Area - Testing MSD
|
Cameroon has reported that their FY19 targets are wrong in the PPR. known issue - just adding another country to the list.
for programs that are primarily KP this really screws with their targets (e.g. doubling them in Cameroon's case)
|
1.0
|
HTS POS FY19 targets incorrect - age/sex/result plus keypop/result - Cameroon has reported that their FY19 targets are wrong in the PPR. known issue - just adding another country to the list.
for programs that are primarily KP this really screws with their targets (e.g. doubling them in Cameroon's case)
|
non_process
|
hts pos targets incorrect age sex result plus keypop result cameroon has reported that their targets are wrong in the ppr known issue just adding another country to the list for programs that are primarily kp this really screws with their targets e g doubling them in cameroon s case
| 0
|
259,080
| 19,584,842,057
|
IssuesEvent
|
2022-01-05 04:40:52
|
GetBlok-io/Translations
|
https://api.github.com/repos/GetBlok-io/Translations
|
closed
|
Translation Needed - Russian - GB_BLOKBOUNTY
|
documentation Bounty
|
2 ERG Bounty per translation.
Looking for translations in the following languages:
Chinese
Portuguese
Spanish
Russian
German
Arabic
Translations must be true and will be verified (NO GOOGLE TRANSLATE). Please make sure to post a comment if you accept the bounty to avoid duplicative work.
Notify @ArOhBeK with any questions.
https://github.com/GetBlok-io/Translations/blob/main/GB_2022_BLOKBOUNTY.pptm
|
1.0
|
Translation Needed - Russian - GB_BLOKBOUNTY - 2 ERG Bounty per translation.
Looking for translations in the following languages:
Chinese
Portuguese
Spanish
Russian
German
Arabic
Translations must be true and will be verified (NO GOOGLE TRANSLATE). Please make sure to post a comment if you accept the bounty to avoid duplicative work.
Notify @ArOhBeK with any questions.
https://github.com/GetBlok-io/Translations/blob/main/GB_2022_BLOKBOUNTY.pptm
|
non_process
|
translation needed russian gb blokbounty erg bounty per translation looking for translations in the following languages chinese portuguese spanish russian german arabic translations must be true and will be verified no google translate please make sure to post a comment if you accept the bounty to avoid duplicative work notify arohbek with any questions
| 0
|
5,100
| 7,881,645,994
|
IssuesEvent
|
2018-06-26 19:47:17
|
ArctosDB/new-collections
|
https://api.github.com/repos/ArctosDB/new-collections
|
closed
|
Angelo State
|
Draft MOU in process Fee Reduction Approved Registered at GRBIO
|
Request for fee reduction received, approved by ASC; need to formally contact to indicate approval of request; new collections manager recently hired with Arctos background from MSB
|
1.0
|
Angelo State - Request for fee reduction received, approved by ASC; need to formally contact to indicate approval of request; new collections manager recently hired with Arctos background from MSB
|
process
|
angelo state request for fee reduction received approved by asc need to formally contact to indicate approval of request new collections manager recently hired with arctos background from msb
| 1
|
219
| 2,649,312,944
|
IssuesEvent
|
2015-03-14 19:56:06
|
eiskaltdcpp/eiskaltdcpp
|
https://api.github.com/repos/eiskaltdcpp/eiskaltdcpp
|
opened
|
Translation issues
|
imported Type-DevProcess
|
_From [Nickollai](https://code.google.com/u/Nickollai/) on March 23, 2010 16:31:21_
I suggest reporting them in this bug.
1) on the tray icon: "Show/hide window"
2) in the settings: "I'm away. State your business and I might answer later if
you're lucky."
_Original issue: http://code.google.com/p/eiskaltdc/issues/detail?id=237_
|
1.0
|
Translation issues - _From [Nickollai](https://code.google.com/u/Nickollai/) on March 23, 2010 16:31:21_
I suggest reporting them in this bug.
1) on the tray icon: "Show/hide window"
2) in the settings: "I'm away. State your business and I might answer later if
you're lucky."
_Original issue: http://code.google.com/p/eiskaltdc/issues/detail?id=237_
|
process
|
translation issues from on march i suggest reporting them in this bug on the tray icon show hide window in the settings i m away state your business and i might answer later if you re lucky original issue
| 1
|
266,908
| 23,267,816,053
|
IssuesEvent
|
2022-08-04 19:14:44
|
ValveSoftware/Source-1-Games
|
https://api.github.com/repos/ValveSoftware/Source-1-Games
|
closed
|
[TF2] Lasers on rd_asteroid are broken
|
Team Fortress 2 Need Retest Animation
|
On rd_asteroid, env_laser entities are used at both sides of the map and rely on path_track as endpoints of the lasers so they can be displayed. However, since the 7/7/2022 update, path_track entities are now server-side and as such, can't be used as endpoints for env_lasers. Lasers on rd_asteroid are now rendered like this:

|
1.0
|
[TF2] Lasers on rd_asteroid are broken - On rd_asteroid, env_laser entities are used at both sides of the map and rely on path_track as endpoints of the lasers so they can be displayed. However, since the 7/7/2022 update, path_track entities are now server-side and as such, can't be used as endpoints for env_lasers. Lasers on rd_asteroid are now rendered like this:

|
non_process
|
lasers on rd asteroid are broken on rd asteroid env laser entities are used at both sides of the map and rely on path track as endpoints of the lasers so they can be displayed however since the update path track entities are now server side and as such can t be used as endpoints for env lasers lasers on rd asteroid are now rendered like this
| 0
|
252,277
| 8,034,011,535
|
IssuesEvent
|
2018-07-29 13:48:14
|
Automattic/liveblog
|
https://api.github.com/repos/Automattic/liveblog
|
opened
|
Updates stop when paginating AMP Liveblog
|
Priority::High confirmed bug
|
When viewing an AMP Liveblog, it provides a notification of new entries. These cease as soon as pagination is used.
# Steps to reproduce
1. Create a Liveblog with more than one page
2. Open the AMP version
3. Post a new update via the non-AMP liveblog
4. A notification will appear on the AMP liveblog
5. Click through to page 2 on the AMP liveblog
6. Post a new entry on the non-AMP liveblog
7. You will not see a notification on the AMP liveblog
8. Click back to page 1
9. There will be no new entries or notifications shown until you refresh.
|
1.0
|
Updates stop when paginating AMP Liveblog - When viewing an AMP Liveblog, it provides a notification of new entries. These cease as soon as pagination is used.
# Steps to reproduce
1. Create a Liveblog with more than one page
2. Open the AMP version
3. Post a new update via the non-AMP liveblog
4. A notification will appear on the AMP liveblog
5. Click through to page 2 on the AMP liveblog
6. Post a new entry on the non-AMP liveblog
7. You will not see a notification on the AMP liveblog
8. Click back to page 1
9. There will be no new entries or notifications shown until you refresh.
|
non_process
|
updates stop when paginating amp liveblog when viewing an amp liveblog it provides a notification of new entries these cease as soon as pagination is used steps to reproduce create a liveblog with more than one page open the amp version post a new update via the non amp liveblog a notification will appear on the amp liveblog click through to page on the amp liveblog post a new entry on the non amp liveblog you will not see a notification on the amp liveblog click back to page there will be no new entries or notifications shown until you refresh
| 0
|
127,694
| 18,018,503,231
|
IssuesEvent
|
2021-09-16 16:21:08
|
harrinry/spark-on-k8s-operator
|
https://api.github.com/repos/harrinry/spark-on-k8s-operator
|
opened
|
CVE-2020-36066 (High) detected in github.com/tidwall/match-v1.0.0
|
security vulnerability
|
## CVE-2020-36066 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/tidwall/match-v1.0.0</b></p></summary>
<p>Simple string pattern matcher for Go</p>
<p>
Dependency Hierarchy:
- github.com/google/go-cloud-v0.1.1 (Root Library)
- github.com/tidwall/sjson-v1.0.0
- github.com/tidwall/gjson-v1.1.2
- :x: **github.com/tidwall/match-v1.0.0** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/harrinry/spark-on-k8s-operator/commit/23cd89752b5978470bb8f7c88cae5ab268573cea">23cd89752b5978470bb8f7c88cae5ab268573cea</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
GJSON <1.6.5 allows attackers to cause a denial of service (remote) via crafted JSON.
<p>Publish Date: 2021-01-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36066>CVE-2020-36066</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36066">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36066</a></p>
<p>Release Date: 2021-01-05</p>
<p>Fix Resolution: v1.0.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"GO","packageName":"github.com/tidwall/match","packageVersion":"v1.0.0","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"github.com/google/go-cloud:v0.1.1;github.com/tidwall/sjson:v1.0.0;github.com/tidwall/gjson:v1.1.2;github.com/tidwall/match:v1.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v1.0.3"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-36066","vulnerabilityDetails":"GJSON \u003c1.6.5 allows attackers to cause a denial of service (remote) via crafted JSON.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36066","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-36066 (High) detected in github.com/tidwall/match-v1.0.0 - ## CVE-2020-36066 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/tidwall/match-v1.0.0</b></p></summary>
<p>Simple string pattern matcher for Go</p>
<p>
Dependency Hierarchy:
- github.com/google/go-cloud-v0.1.1 (Root Library)
- github.com/tidwall/sjson-v1.0.0
- github.com/tidwall/gjson-v1.1.2
- :x: **github.com/tidwall/match-v1.0.0** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/harrinry/spark-on-k8s-operator/commit/23cd89752b5978470bb8f7c88cae5ab268573cea">23cd89752b5978470bb8f7c88cae5ab268573cea</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
GJSON <1.6.5 allows attackers to cause a denial of service (remote) via crafted JSON.
<p>Publish Date: 2021-01-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36066>CVE-2020-36066</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36066">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36066</a></p>
<p>Release Date: 2021-01-05</p>
<p>Fix Resolution: v1.0.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"GO","packageName":"github.com/tidwall/match","packageVersion":"v1.0.0","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"github.com/google/go-cloud:v0.1.1;github.com/tidwall/sjson:v1.0.0;github.com/tidwall/gjson:v1.1.2;github.com/tidwall/match:v1.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v1.0.3"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-36066","vulnerabilityDetails":"GJSON \u003c1.6.5 allows attackers to cause a denial of service (remote) via crafted JSON.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36066","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in github com tidwall match cve high severity vulnerability vulnerable library github com tidwall match simple string pattern matcher for go dependency hierarchy github com google go cloud root library github com tidwall sjson github com tidwall gjson x github com tidwall match vulnerable library found in head commit a href found in base branch master vulnerability details gjson allows attackers to cause a denial of service remote via crafted json publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree github com google go cloud github com tidwall sjson github com tidwall gjson github com tidwall match isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails gjson allows attackers to cause a denial of service remote via crafted json vulnerabilityurl
| 0
|
429,618
| 12,426,523,354
|
IssuesEvent
|
2020-05-24 21:37:56
|
danbooru/danbooru
|
https://api.github.com/repos/danbooru/danbooru
|
closed
|
dmail: Turn nested quote block into expand block
|
Low Priority
|
Viewing/previewing dmail only:
Since the Respond link wraps the old message in a quote block, a Respond to that Respond wraps _that_ in a quote block, etc...; you end up with deeply-nested quote blocks taking up a lot of space.
Second level quote blocks should be displayed as expand blocks with the first line in the block as the label, e.g. `[expand=RaisingK said:]`. The block remains `[quote]` in the raw text.
`QuoteA --> QuoteB --> QuoteC` displays as `QuoteA --> ExpandB --> QuoteC`. When the user responds to that, the message displays as `QuoteX --> ExpandA --> QuoteB --> QuoteC`.
|
1.0
|
dmail: Turn nested quote block into expand block - Viewing/previewing dmail only:
Since the Respond link wraps the old message in a quote block, a Respond to that Respond wraps _that_ in a quote block, etc...; you end up with deeply-nested quote blocks taking up a lot of space.
Second level quote blocks should be displayed as expand blocks with the first line in the block as the label, e.g. `[expand=RaisingK said:]`. The block remains `[quote]` in the raw text.
`QuoteA --> QuoteB --> QuoteC` displays as `QuoteA --> ExpandB --> QuoteC`. When the user responds to that, the message displays as `QuoteX --> ExpandA --> QuoteB --> QuoteC`.
|
non_process
|
dmail turn nested quote block into expand block viewing previewing dmail only since the respond link wraps the old message in a quote block a respond to that respond wraps that in a quote block etc you end up with deeply nested quote blocks taking up a lot of space second level quote blocks should be displayed as expand blocks with the first line in the block as the label e g the block remains in the raw text quotea quoteb quotec displays as quotea expandb quotec when the user responds to that the message displays as quotex expanda quoteb quotec
| 0
|
66,767
| 12,824,857,265
|
IssuesEvent
|
2020-07-06 14:07:13
|
MeAmAnUsername/pie
|
https://api.github.com/repos/MeAmAnUsername/pie
|
opened
|
remove try-p2j-ast-exp
|
Component: code base Priority: low Status: specified Type: enhancement
|
It turns errors at stratego compile time into errors at Java compile time / Java runtime, which is bad.
|
1.0
|
remove try-p2j-ast-exp - It turns errors at stratego compile time into errors at Java compile time / Java runtime, which is bad.
|
non_process
|
remove try ast exp it turns errors at stratego compile time into errors at java compile time java runtime which is bad
| 0
|
312,405
| 26,862,348,525
|
IssuesEvent
|
2023-02-03 19:38:21
|
spring-projects/spring-security
|
https://api.github.com/repos/spring-projects/spring-security
|
closed
|
WebTestUtilsTestRuntimeHints should only be invoked for Servlet
|
in: test type: bug status: forward-port
|
Forward port of issue #12622 to 6.1.x.
|
1.0
|
WebTestUtilsTestRuntimeHints should only be invoked for Servlet - Forward port of issue #12622 to 6.1.x.
|
non_process
|
webtestutilstestruntimehints should only be invoked for servlet forward port of issue to x
| 0
|
9,410
| 12,406,901,466
|
IssuesEvent
|
2020-05-21 20:01:35
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
LocationWrapper can be initialized without `messageSandbox`, but `messageSandbox` optional isn't properly handled
|
AREA: client SYSTEM: client side processing TYPE: bug
|
<!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest Hammerhead version (https://github.com/DevExpress/testcafe-hammerhead/releases), where this behavior might have been already addressed.
Before submitting an issue, please check existing issues in this repository (https://github.com/DevExpress/testcafe-hammerhead/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Scenario?
`LocationWrapper` can be initialized without `messageSandbox`, but `messageSandbox` optional isn't properly handled
### What is the Current behavior?
Test is failing on
```
Cannot read property 'on' of undefined
```
when `LocationWrapper` is initialized without `messageSandbox`
https://github.com/DevExpress/testcafe-hammerhead/blob/master/src/client/sandbox/code-instrumentation/properties/index.ts#L87
https://github.com/DevExpress/testcafe-hammerhead/blob/master/src/client/sandbox/code-instrumentation/location/wrapper.ts#L159
### What is the Expected behavior?
Call `messageSandbox.on` only if `messageSandbox` is provided
https://github.com/DevExpress/testcafe-hammerhead/blob/master/src/client/sandbox/code-instrumentation/location/wrapper.ts#L159
### What is your public web site URL?
<!-- Share a public accessible link to your web site or provide a simple app which we can run. -->
Your website URL (or attach your complete example):
<details>
<summary>Your complete app code (or attach your test files):</summary>
<!-- Paste your app code here: -->
```js
```
</details>
<details>
<summary>Screenshots:</summary>
<!-- If applicable, add screenshots to help explain the issue. -->
```
```
</details>
### Steps to Reproduce:
<!-- Describe what we should do to reproduce the behavior you encountered. -->
1. Go to: ...
2. Execute this command: ...
3. See the error: ...
### Your Environment details:
* node.js version: <!-- run `node -v` -->
* browser name and version: <!-- example: IE 11, Chrome 69, Firefox 100, etc. -->
* platform and version: <!-- example: "macOS 10.14, Windows, Linux Ubuntu 18.04.1, iOS 12 -->
* other: <!-- any notes you consider important -->
|
1.0
|
LocationWrapper can be initialized without `messageSandbox`, but `messageSandbox` optional isn't properly handled - <!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest Hammerhead version (https://github.com/DevExpress/testcafe-hammerhead/releases), where this behavior might have been already addressed.
Before submitting an issue, please check existing issues in this repository (https://github.com/DevExpress/testcafe-hammerhead/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Scenario?
`LocationWrapper` can be initialized without `messageSandbox`, but `messageSandbox` optional isn't properly handled
### What is the Current behavior?
Test is failing on
```
Cannot read property 'on' of undefined
```
when `LocationWrapper` is initialized without `messageSandbox`
https://github.com/DevExpress/testcafe-hammerhead/blob/master/src/client/sandbox/code-instrumentation/properties/index.ts#L87
https://github.com/DevExpress/testcafe-hammerhead/blob/master/src/client/sandbox/code-instrumentation/location/wrapper.ts#L159
### What is the Expected behavior?
Call `messageSandbox.on` only if `messageSandbox` is provided
https://github.com/DevExpress/testcafe-hammerhead/blob/master/src/client/sandbox/code-instrumentation/location/wrapper.ts#L159
### What is your public web site URL?
<!-- Share a public accessible link to your web site or provide a simple app which we can run. -->
Your website URL (or attach your complete example):
<details>
<summary>Your complete app code (or attach your test files):</summary>
<!-- Paste your app code here: -->
```js
```
</details>
<details>
<summary>Screenshots:</summary>
<!-- If applicable, add screenshots to help explain the issue. -->
```
```
</details>
### Steps to Reproduce:
<!-- Describe what we should do to reproduce the behavior you encountered. -->
1. Go to: ...
2. Execute this command: ...
3. See the error: ...
### Your Environment details:
* node.js version: <!-- run `node -v` -->
* browser name and version: <!-- example: IE 11, Chrome 69, Firefox 100, etc. -->
* platform and version: <!-- example: "macOS 10.14, Windows, Linux Ubuntu 18.04.1, iOS 12 -->
* other: <!-- any notes you consider important -->
|
process
|
locationwrapper can be initialized without messagesandbox but messagesandbox optional isn t properly handled if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest hammerhead version where this behavior might have been already addressed before submitting an issue please check existing issues in this repository in case a similar issue exists or was already addressed this may save your time and ours what is your scenario locationwrapper can be initialized without messagesandbox but messagesandbox optional isn t properly handled what is the current behavior test is failing on cannot read property on of undefined when locationwrapper is initialized without messagesandbox what is the expected behavior call messagesandbox on only if messagesandbox is provided what is your public web site url your website url or attach your complete example your complete app code or attach your test files js screenshots steps to reproduce go to execute this command see the error your environment details node js version browser name and version platform and version other
| 1
|
311,424
| 9,532,901,324
|
IssuesEvent
|
2019-04-29 19:50:37
|
hackla-engage/engage-client
|
https://api.github.com/repos/hackla-engage/engage-client
|
closed
|
When viewing Item Detail Card, browser "back" button does not behave as expected
|
Priority: Medium Status: Backlog Type: Bug
|
when viewing the item detail card, pressing the back button does not take the user back to the feed. As the URL changes when viewing an item detail card (an id query param is added to it with the id of the card in question), and as clicking the back button does change the URL, it should also mimic the behavior of taking the user back to the previous "screen" (i.e. the feed).
To Reproduce:
1. Open this URL: http://localhost:8080/#/feed?id=40
2. Press the "back" button on your browser.
3. Note that the URL has changed to: http://localhost:8080/#/feed -- this is the URL to view the Feed.
Expected behavior:
- The Item Detail Card disappears and the user is returned to the feed.
Actual behavior:
- The Item Detail Card does not disappear.
|
1.0
|
When viewing Item Detail Card, browser "back" button does not behave as expected - when viewing the item detail card, pressing the back button does not take the user back to the feed. As the URL changes when viewing an item detail card (an id query param is added to it with the id of the card in question), and as clicking the back button does change the URL, it should also mimic the behavior of taking the user back to the previous "screen" (i.e. the feed).
To Reproduce:
1. Open this URL: http://localhost:8080/#/feed?id=40
2. Press the "back" button on your browser.
3. Note that the URL has changed to: http://localhost:8080/#/feed -- this is the URL to view the Feed.
Expected behavior:
- The Item Detail Card disappears and the user is returned to the feed.
Actual behavior:
- The Item Detail Card does not disappear.
|
non_process
|
when viewing item detail card browser back button does not behave as expected when viewing the item detail card pressing the back button does not take the user back to the feed as the url changes when viewing an item detail card an id query param is added to it with the id of the card in question and as clicking the back button does change the url it should also mimic the behavior of taking the user back to the previous screen i e the feed to reproduce open this url press the back button on your browser note that the url has changed to this is the url to view the feed expected behavior the item detail card disappears and the user is returned to the feed actual behavior the item detail card does not disappear
| 0
|
7,670
| 10,758,930,667
|
IssuesEvent
|
2019-10-31 15:46:22
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Storage: 'test_bpo_set_unset_preserves_acls' no longer sees expected 'BadRequest'.
|
api: storage backend testing type: process
|
From [this Kokoro job](https://source.cloud.google.com/results/invocations/2691ce70-20dd-4f90-ad95-d483b80b4ec2/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fstorage/log):
```python
____________ TestIAMConfiguration.test_bpo_set_unset_preserves_acls ____________
self = <tests.system.TestIAMConfiguration testMethod=test_bpo_set_unset_preserves_acls>
def test_bpo_set_unset_preserves_acls(self):
new_bucket_name = "bpo-acls" + unique_resource_id("-")
self.assertRaises(
exceptions.NotFound, Config.CLIENT.get_bucket, new_bucket_name
)
bucket = retry_429(Config.CLIENT.create_bucket)(new_bucket_name)
self.case_buckets_to_delete.append(new_bucket_name)
blob_name = "my-blob.txt"
blob = bucket.blob(blob_name)
payload = b"DEADBEEF"
blob.upload_from_string(payload)
# Preserve ACLs before setting BPO
bucket_acl_before = list(bucket.acl)
blob_acl_before = list(bucket.acl)
# Set BPO
bucket.iam_configuration.bucket_policy_only_enabled = True
bucket.patch()
# While BPO is set, cannot get / set ACLs
with self.assertRaises(exceptions.BadRequest):
> bucket.acl.reload()
E AssertionError: BadRequest not raised
tests/system.py:1709: AssertionError
```
|
1.0
|
Storage: 'test_bpo_set_unset_preserves_acls' no longer sees expected 'BadRequest'. - From [this Kokoro job](https://source.cloud.google.com/results/invocations/2691ce70-20dd-4f90-ad95-d483b80b4ec2/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fstorage/log):
```python
____________ TestIAMConfiguration.test_bpo_set_unset_preserves_acls ____________
self = <tests.system.TestIAMConfiguration testMethod=test_bpo_set_unset_preserves_acls>
def test_bpo_set_unset_preserves_acls(self):
new_bucket_name = "bpo-acls" + unique_resource_id("-")
self.assertRaises(
exceptions.NotFound, Config.CLIENT.get_bucket, new_bucket_name
)
bucket = retry_429(Config.CLIENT.create_bucket)(new_bucket_name)
self.case_buckets_to_delete.append(new_bucket_name)
blob_name = "my-blob.txt"
blob = bucket.blob(blob_name)
payload = b"DEADBEEF"
blob.upload_from_string(payload)
# Preserve ACLs before setting BPO
bucket_acl_before = list(bucket.acl)
blob_acl_before = list(bucket.acl)
# Set BPO
bucket.iam_configuration.bucket_policy_only_enabled = True
bucket.patch()
# While BPO is set, cannot get / set ACLs
with self.assertRaises(exceptions.BadRequest):
> bucket.acl.reload()
E AssertionError: BadRequest not raised
tests/system.py:1709: AssertionError
```
|
process
|
storage test bpo set unset preserves acls no longer sees expected badrequest from python testiamconfiguration test bpo set unset preserves acls self def test bpo set unset preserves acls self new bucket name bpo acls unique resource id self assertraises exceptions notfound config client get bucket new bucket name bucket retry config client create bucket new bucket name self case buckets to delete append new bucket name blob name my blob txt blob bucket blob blob name payload b deadbeef blob upload from string payload preserve acls before setting bpo bucket acl before list bucket acl blob acl before list bucket acl set bpo bucket iam configuration bucket policy only enabled true bucket patch while bpo is set cannot get set acls with self assertraises exceptions badrequest bucket acl reload e assertionerror badrequest not raised tests system py assertionerror
| 1
|
17,359
| 23,185,419,856
|
IssuesEvent
|
2022-08-01 07:58:13
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
closed
|
[New Connector] No watermarks using new source connector.
|
compute/data-processing type/bug
|
will need to update with instructions https://github.com/streamnative/sn-pulsar-flink-workshop
^ can u access this repo? to reproduce
Run docker-compose up pulsar
Run setup.sh
Run this class https://github.com/streamnative/sn-pulsar-flink-workshop/blob/main/src/main/java/io/ipolyzos/compute/v3/EnrichmentStream.java <- You don't need a flink cluster, they binaries will spin up one with a UI -- so navigate to the flink UI
Run both the producers here https://github.com/streamnative/sn-pulsar-flink-workshop/tree/main/src/main/java/io/ipolyzos/producers
and u should see no watermarks.
Again i get this only with pulsar-flink, using another source to generate records and set watermarks seems to work :thinking_face:
According to Ioannis, we don't have watermarks on the source.
|
1.0
|
[New Connector] No watermarks using new source connector. - will need to update with instructions https://github.com/streamnative/sn-pulsar-flink-workshop
^ can u access this repo? to reproduce
Run docker-compose up pulsar
Run setup.sh
Run this class https://github.com/streamnative/sn-pulsar-flink-workshop/blob/main/src/main/java/io/ipolyzos/compute/v3/EnrichmentStream.java <- You don't need a flink cluster, they binaries will spin up one with a UI -- so navigate to the flink UI
Run both the producers here https://github.com/streamnative/sn-pulsar-flink-workshop/tree/main/src/main/java/io/ipolyzos/producers
and u should see no watermarks.
Again i get this only with pulsar-flink, using another source to generate records and set watermarks seems to work :thinking_face:
According to Ioannis, we don't have watermarks on the source.
|
process
|
no watermarks using new source connector will need to update with instructions can u access this repo to reproduce run docker compose up pulsar run setup sh run this class you don t need a flink cluster they binaries will spin up one with a ui so navigate to the flink ui run both the producers here and u should see no watermarks again i get this only with pulsar flink using another source to generate records and set watermarks seems to work thinking face according to ioannis we don t have watermarks on the source
| 1
|
2,352
| 5,164,004,478
|
IssuesEvent
|
2017-01-17 09:12:10
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[subtitles] [eng] Orlando : « Un meurtre homophobe de masse »
|
Language: English Process: [6] Approved
|
# Video title
Orlando : « Un meurtre homophobe de masse »
# URL
https://www.youtube.com/watch?v=0I5nlon7Mn0
# Youtube subtitle language
Anglais
# Duration
3:14
# URL subtitles
https://www.youtube.com/timedtext_editor?ref=player&v=0I5nlon7Mn0&tab=captions&bl=vmp&lang=en&ui=hd&action_mde_edit_form=1
|
1.0
|
[subtitles] [eng] Orlando : « Un meurtre homophobe de masse » - # Video title
Orlando : « Un meurtre homophobe de masse »
# URL
https://www.youtube.com/watch?v=0I5nlon7Mn0
# Youtube subtitle language
Anglais
# Duration
3:14
# URL subtitles
https://www.youtube.com/timedtext_editor?ref=player&v=0I5nlon7Mn0&tab=captions&bl=vmp&lang=en&ui=hd&action_mde_edit_form=1
|
process
|
orlando « un meurtre homophobe de masse » video title orlando « un meurtre homophobe de masse » url youtube subtitle language anglais duration url subtitles
| 1
|
112,069
| 4,506,130,931
|
IssuesEvent
|
2016-09-02 01:46:06
|
Aricelio/App-Balancete
|
https://api.github.com/repos/Aricelio/App-Balancete
|
closed
|
Criar Tabela N pra N 'TagLancamento'
|
higher priority task
|
A tabela Lancamento e a tabela Tag possuem relacionamento N para N
|
1.0
|
Criar Tabela N pra N 'TagLancamento' - A tabela Lancamento e a tabela Tag possuem relacionamento N para N
|
non_process
|
criar tabela n pra n taglancamento a tabela lancamento e a tabela tag possuem relacionamento n para n
| 0
|
17,971
| 23,983,657,651
|
IssuesEvent
|
2022-09-13 17:03:54
|
mdsreq-fga-unb/2022.1-Capita-C
|
https://api.github.com/repos/mdsreq-fga-unb/2022.1-Capita-C
|
closed
|
Processo de Requisitos
|
requisito ProcessoRequisitos REQ Comentários Professor
|
Da mesma maneira como foi realizado na unidade 1, está sendo listado um conjunto de atividades: quando serão feitas, por quem, etc. Onde essas atividades estão ou serão posicionadas no processo de trabalho?
**Fonte**
https://mdsreq-fga-unb.github.io/2022.1-Capita-C/processoER/
|
1.0
|
Processo de Requisitos - Da mesma maneira como foi realizado na unidade 1, está sendo listado um conjunto de atividades: quando serão feitas, por quem, etc. Onde essas atividades estão ou serão posicionadas no processo de trabalho?
**Fonte**
https://mdsreq-fga-unb.github.io/2022.1-Capita-C/processoER/
|
process
|
processo de requisitos da mesma maneira como foi realizado na unidade está sendo listado um conjunto de atividades quando serão feitas por quem etc onde essas atividades estão ou serão posicionadas no processo de trabalho fonte
| 1
|
18,511
| 24,551,581,120
|
IssuesEvent
|
2022-10-12 13:00:49
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Mobile apps] Sign in screen > Password field > 'Eye' icon should not get displayed in the mobile apps
|
Bug P1 iOS Android Process: Fixed Process: Tested dev Auth server
|
Sign in screen > Password field > 'Eye' icon should not get displayed in the mobile apps
|
2.0
|
[Mobile apps] Sign in screen > Password field > 'Eye' icon should not get displayed in the mobile apps - Sign in screen > Password field > 'Eye' icon should not get displayed in the mobile apps
|
process
|
sign in screen password field eye icon should not get displayed in the mobile apps sign in screen password field eye icon should not get displayed in the mobile apps
| 1
|
344,397
| 24,812,256,723
|
IssuesEvent
|
2022-10-25 10:20:21
|
postmanlabs/postman-app-support
|
https://api.github.com/repos/postmanlabs/postman-app-support
|
closed
|
Markdown numbered lists clip tens digit when rendered
|
bug product/documentation
|
### Is there an existing issue for this?
- [X] I have searched the tracker for existing similar issues and I know that duplicates will be closed
### Describe the Issue
When I add ten or more items to a numbered markdown list for documentation, the tens digit is clipped when rendered. So the rended tenth element looks like
```
...
9. ninth element
0. tenth element
1. eleventh element
...
```
All digits are visible in edit mode, in both *Classic Markdown* and *Postman beta* editors. Only when saved or previewed is the tens digit clipped.
### Steps To Reproduce
1. Go to any documentation panel.
2. Create a numbered list with more than nine items
3. Save it
You should see the tens digit clipped when the numbered list is rendered. For convenience, try adding the list below to a Postman markdown editor.
```
1. primum
2. secundum
3. tertium
4. quartum
5. quintum
6. sextum
7. septimum
8. octavum
9. nonum
10. decimum
11. undecimum
```
### Screenshots or Videos

### Operating System
macOS
### Postman Version
UI version: 9.20.3, Desktop Platform Version: 9.19.0
### Postman Platform
Postman App
### Additional Context?
_No response_
|
1.0
|
Markdown numbered lists clip tens digit when rendered - ### Is there an existing issue for this?
- [X] I have searched the tracker for existing similar issues and I know that duplicates will be closed
### Describe the Issue
When I add ten or more items to a numbered markdown list for documentation, the tens digit is clipped when rendered. So the rended tenth element looks like
```
...
9. ninth element
0. tenth element
1. eleventh element
...
```
All digits are visible in edit mode, in both *Classic Markdown* and *Postman beta* editors. Only when saved or previewed is the tens digit clipped.
### Steps To Reproduce
1. Go to any documentation panel.
2. Create a numbered list with more than nine items
3. Save it
You should see the tens digit clipped when the numbered list is rendered. For convenience, try adding the list below to a Postman markdown editor.
```
1. primum
2. secundum
3. tertium
4. quartum
5. quintum
6. sextum
7. septimum
8. octavum
9. nonum
10. decimum
11. undecimum
```
### Screenshots or Videos

### Operating System
macOS
### Postman Version
UI version: 9.20.3, Desktop Platform Version: 9.19.0
### Postman Platform
Postman App
### Additional Context?
_No response_
|
non_process
|
markdown numbered lists clip tens digit when rendered is there an existing issue for this i have searched the tracker for existing similar issues and i know that duplicates will be closed describe the issue when i add ten or more items to a numbered markdown list for documentation the tens digit is clipped when rendered so the rended tenth element looks like ninth element tenth element eleventh element all digits are visible in edit mode in both classic markdown and postman beta editors only when saved or previewed is the tens digit clipped steps to reproduce go to any documentation panel create a numbered list with more than nine items save it you should see the tens digit clipped when the numbered list is rendered for convenience try adding the list below to a postman markdown editor primum secundum tertium quartum quintum sextum septimum octavum nonum decimum undecimum screenshots or videos operating system macos postman version ui version desktop platform version postman platform postman app additional context no response
| 0
|
294,814
| 25,407,746,164
|
IssuesEvent
|
2022-11-22 16:27:58
|
tnha22/gPBL-Team2
|
https://api.github.com/repos/tnha22/gPBL-Team2
|
opened
|
AnhTNH - Testing System
|
Testing
|
Includes:
- Test front-end
- Test APIs
- Test design documents
- Test documents
|
1.0
|
AnhTNH - Testing System - Includes:
- Test front-end
- Test APIs
- Test design documents
- Test documents
|
non_process
|
anhtnh testing system includes test front end test apis test design documents test documents
| 0
|
7,763
| 10,885,389,505
|
IssuesEvent
|
2019-11-18 10:18:02
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
[Bug report]api-proxy把非function类型属性变成了function
|
bug processing
|
**问题描述**
api-proxy把非function类型属性变成了function
**复现步骤**
请给出如何复现这个问题的方法,像下面一样分步骤描述清楚
1. 访问mpx.env属性,却获取到了一个function类型的属性
**期望的表现**
获取到mpx下的原来的非function属性
|
1.0
|
[Bug report]api-proxy把非function类型属性变成了function - **问题描述**
api-proxy把非function类型属性变成了function
**复现步骤**
请给出如何复现这个问题的方法,像下面一样分步骤描述清楚
1. 访问mpx.env属性,却获取到了一个function类型的属性
**期望的表现**
获取到mpx下的原来的非function属性
|
process
|
api proxy把非function类型属性变成了function 问题描述 api proxy把非function类型属性变成了function 复现步骤 请给出如何复现这个问题的方法,像下面一样分步骤描述清楚 访问mpx env属性,却获取到了一个function类型的属性 期望的表现 获取到mpx下的原来的非function属性
| 1
|
56,002
| 11,494,520,969
|
IssuesEvent
|
2020-02-12 01:49:37
|
toolbox-team/reddit-moderator-toolbox
|
https://api.github.com/repos/toolbox-team/reddit-moderator-toolbox
|
opened
|
Abstract the context listeners a bit more
|
code quality
|
Currently, in order to add an item to the context menu, a lot of logic checking has to be done to add or remove the right elements (for example, [usernotes.js:778-801](https://github.com/toolbox-team/reddit-moderator-toolbox/blob/83613b4e2aaaef637124874be2bcf77ede96682d/extension/data/modules/usernotes.js#L778-L801)). We could greatly simplify this by creating an API something like this:
```js
TBui.registerContextItem({
title: "Show /r/{{subreddit}} usernotes",
icon: TBui.icons.usernote,
isSubredditPage: true,
isMod: true,
}, async (page) => {
// The action to be executed when the context item is clicked.
// Gets page info as an argument so we don't have to store
// things on the trigger itself as data parameters.
});
```
TBui would then be responsible for a single `TBNewPage` listener that would check the conditions of all listeners every time the page changed and add or remove them as necessary. This would also mean we wouldn't have to maintain unique IDs on things anymore, since the listeners can be attached directly to the DOM elements without the need for a `$body.on('click', '#thing', ...)` call from the consumer.
|
1.0
|
Abstract the context listeners a bit more - Currently, in order to add an item to the context menu, a lot of logic checking has to be done to add or remove the right elements (for example, [usernotes.js:778-801](https://github.com/toolbox-team/reddit-moderator-toolbox/blob/83613b4e2aaaef637124874be2bcf77ede96682d/extension/data/modules/usernotes.js#L778-L801)). We could greatly simplify this by creating an API something like this:
```js
TBui.registerContextItem({
title: "Show /r/{{subreddit}} usernotes",
icon: TBui.icons.usernote,
isSubredditPage: true,
isMod: true,
}, async (page) => {
// The action to be executed when the context item is clicked.
// Gets page info as an argument so we don't have to store
// things on the trigger itself as data parameters.
});
```
TBui would then be responsible for a single `TBNewPage` listener that would check the conditions of all listeners every time the page changed and add or remove them as necessary. This would also mean we wouldn't have to maintain unique IDs on things anymore, since the listeners can be attached directly to the DOM elements without the need for a `$body.on('click', '#thing', ...)` call from the consumer.
|
non_process
|
abstract the context listeners a bit more currently in order to add an item to the context menu a lot of logic checking has to be done to add or remove the right elements for example we could greatly simplify this by creating an api something like this js tbui registercontextitem title show r subreddit usernotes icon tbui icons usernote issubredditpage true ismod true async page the action to be executed when the context item is clicked gets page info as an argument so we don t have to store things on the trigger itself as data parameters tbui would then be responsible for a single tbnewpage listener that would check the conditions of all listeners every time the page changed and add or remove them as necessary this would also mean we wouldn t have to maintain unique ids on things anymore since the listeners can be attached directly to the dom elements without the need for a body on click thing call from the consumer
| 0
|
10,292
| 13,145,442,257
|
IssuesEvent
|
2020-08-08 03:30:27
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Feature Request: Packetbeat in docker network
|
:Processors Packetbeat Stalled enhancement needs_team
|
It would be nice if packetbeat were more "docker aware", and if the `add_docker_metadata` could enrich docker container networks.
For example, we have a stack set up on one host, communicating using a bridge network. The packetbeat container is on the same stack, monitoring the host network.
|
1.0
|
Feature Request: Packetbeat in docker network - It would be nice if packetbeat were more "docker aware", and if the `add_docker_metadata` could enrich docker container networks.
For example, we have a stack set up on one host, communicating using a bridge network. The packetbeat container is on the same stack, monitoring the host network.
|
process
|
feature request packetbeat in docker network it would be nice if packetbeat were more docker aware and if the add docker metadata could enrich docker container networks for example we have a stack set up on one host communicating using a bridge network the packetbeat container is on the same stack monitoring the host network
| 1
|
8,891
| 11,986,063,425
|
IssuesEvent
|
2020-04-07 18:37:33
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Add Pod Priority Class
|
P2 enhancement process
|
**Problem**
When initially installing and when resources are constrained and it needs to preempt pods, Kubernetes does not know which pods are more important than others. [Pod priority class](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) is a critical feature for a stable application to set to indicate to Kubernetes that missing information.
**Solution**
Add priority class to all pods:
- 1000: critical - database
- 750: high - importer
- 500: medium - rest and grpc
- 250: low - prometheus
- 0: unset/default
**Alternatives**
- Use older container resources scheduling
**Additional Context**
|
1.0
|
Add Pod Priority Class - **Problem**
When initially installing and when resources are constrained and it needs to preempt pods, Kubernetes does not know which pods are more important than others. [Pod priority class](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) is a critical feature for a stable application to set to indicate to Kubernetes that missing information.
**Solution**
Add priority class to all pods:
- 1000: critical - database
- 750: high - importer
- 500: medium - rest and grpc
- 250: low - prometheus
- 0: unset/default
**Alternatives**
- Use older container resources scheduling
**Additional Context**
|
process
|
add pod priority class problem when initially installing and when resources are constrained and it needs to preempt pods kubernetes does not know which pods are more important than others is a critical feature for a stable application to set to indicate to kubernetes that missing information solution add priority class to all pods critical database high importer medium rest and grpc low prometheus unset default alternatives use older container resources scheduling additional context
| 1
|
17,113
| 22,634,815,018
|
IssuesEvent
|
2022-06-30 17:51:11
|
hashgraph/hedera-json-rpc-relay
|
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
|
opened
|
Automate deployment of relay in envs
|
enhancement P2 process
|
### Problem
Helm chart deployment capabilities were added.
However, they are currently manual
### Solution
Add automation support to
- [ ] Deploy to integration on every main branch checking
- [ ] Deploy to previewnet on every vx.y.z tag
- [ ] Semi-automated flow to deploy to testnet once test gates have been confirmed
Most work will be performed by OPS so this may serve mostly to track with occasional chart updates
### Alternatives
_No response_
|
1.0
|
Automate deployment of relay in envs - ### Problem
Helm chart deployment capabilities were added.
However, they are currently manual
### Solution
Add automation support to
- [ ] Deploy to integration on every main branch checking
- [ ] Deploy to previewnet on every vx.y.z tag
- [ ] Semi-automated flow to deploy to testnet once test gates have been confirmed
Most work will be performed by OPS so this may serve mostly to track with occasional chart updates
### Alternatives
_No response_
|
process
|
automate deployment of relay in envs problem helm chart deployment capabilities were added however they are currently manual solution add automation support to deploy to integration on every main branch checking deploy to previewnet on every vx y z tag semi automated flow to deploy to testnet once test gates have been confirmed most work will be performed by ops so this may serve mostly to track with occasional chart updates alternatives no response
| 1
|
3,687
| 6,716,778,075
|
IssuesEvent
|
2017-10-14 13:11:46
|
TraningManagementSystem/tms
|
https://api.github.com/repos/TraningManagementSystem/tms
|
closed
|
SwaggerCodegenを行うGradleプラグインの最新バージョン追いつき
|
dev process
|
### Description
SwaggerCodegenを行うGradleプラグインの最新バージョン追いつき。
----
### Details
SwaggerCodegenを行うGradleプラグインの最新バージョン追いつき。
最新版だと、2.8.0になってしまい記載方法のベストプラクティスも変化してきているので・・・
----
### Relation Issue
なし
----
|
1.0
|
SwaggerCodegenを行うGradleプラグインの最新バージョン追いつき - ### Description
SwaggerCodegenを行うGradleプラグインの最新バージョン追いつき。
----
### Details
SwaggerCodegenを行うGradleプラグインの最新バージョン追いつき。
最新版だと、2.8.0になってしまい記載方法のベストプラクティスも変化してきているので・・・
----
### Relation Issue
なし
----
|
process
|
swaggercodegenを行うgradleプラグインの最新バージョン追いつき description swaggercodegenを行うgradleプラグインの最新バージョン追いつき。 details swaggercodegenを行うgradleプラグインの最新バージョン追いつき。 最新版だと、 ・・・ relation issue なし
| 1
|
364,732
| 25,499,635,958
|
IssuesEvent
|
2022-11-28 02:00:47
|
Choi-Won-Jun/bankServiceProject
|
https://api.github.com/repos/Choi-Won-Jun/bankServiceProject
|
opened
|
[All] 형식 통일하기
|
documentation
|
본인 담당 코드에 아래와 같은 형식이 필요하다면,
1. 100,000 원처럼 금액을 표현 할 때 ---> ₩100,000 으로 통일 해주세요.
추가로 통일 필요한 형식이나, 형식 궁금한게 있다면 댓글로 달아주세요.
|
1.0
|
[All] 형식 통일하기 - 본인 담당 코드에 아래와 같은 형식이 필요하다면,
1. 100,000 원처럼 금액을 표현 할 때 ---> ₩100,000 으로 통일 해주세요.
추가로 통일 필요한 형식이나, 형식 궁금한게 있다면 댓글로 달아주세요.
|
non_process
|
형식 통일하기 본인 담당 코드에 아래와 같은 형식이 필요하다면 원처럼 금액을 표현 할 때 ₩ 으로 통일 해주세요 추가로 통일 필요한 형식이나 형식 궁금한게 있다면 댓글로 달아주세요
| 0
|
81,125
| 15,603,083,463
|
IssuesEvent
|
2021-03-19 01:09:28
|
turkdevops/desktop
|
https://api.github.com/repos/turkdevops/desktop
|
opened
|
CVE-2021-23337 (High) detected in lodash-3.10.1.tgz
|
security vulnerability
|
## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: desktop/node_modules/lodash/package.json</p>
<p>Path to vulnerable library: desktop/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- stop-build-1.1.0.tgz (Root Library)
- ggit-1.15.1.tgz
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Command Injection via template.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23337 (High) detected in lodash-3.10.1.tgz - ## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: desktop/node_modules/lodash/package.json</p>
<p>Path to vulnerable library: desktop/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- stop-build-1.1.0.tgz (Root Library)
- ggit-1.15.1.tgz
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Command Injection via template.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file desktop node modules lodash package json path to vulnerable library desktop node modules lodash package json dependency hierarchy stop build tgz root library ggit tgz x lodash tgz vulnerable library found in base branch development vulnerability details all versions of package lodash all versions of package org fujion webjars lodash are vulnerable to command injection via template publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
|
3,847
| 6,808,539,491
|
IssuesEvent
|
2017-11-04 04:17:44
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
whenBlock: Date search does not read empty blocks
|
status-inprocess tools-whenBlock type-bug
|
Only searches non-empty blocks in date mode
whenBlock 2017-06-01T00: block #3800775 : 1496275190 : 2017-05-31 23:59:50 UTC
whenBlock 2017-06-01T00:01: block #3800779 : 1496275259 : 2017-06-01 00:00:59 UTC
Here, we ask for block numbers with one second in between, but a difference of four blocks is reported. This is because whenBlock ignores empty blocks when searching by date (it uses the fullBlockIndex.bin file).
The command
whenBlock 3800778
reports the date correctly, so it only happens in date mode (when given a date and asked to report a block).
|
1.0
|
whenBlock: Date search does not read empty blocks - Only searches non-empty blocks in date mode
whenBlock 2017-06-01T00: block #3800775 : 1496275190 : 2017-05-31 23:59:50 UTC
whenBlock 2017-06-01T00:01: block #3800779 : 1496275259 : 2017-06-01 00:00:59 UTC
Here, we ask for block numbers with one second in between, but a difference of four blocks is reported. This is because whenBlock ignores empty blocks when searching by date (it uses the fullBlockIndex.bin file).
The command
whenBlock 3800778
reports the date correctly, so it only happens in date mode (when given a date and asked to report a block).
|
process
|
whenblock date search does not read empty blocks only searches non empty blocks in date mode whenblock block utc whenblock block utc here we ask for block numbers with one second in between but a difference of four blocks is reported this is because whenblock ignores empty blocks when searching by date it uses the fullblockindex bin file the command whenblock reports the date correctly so it only happens in date mode when given a date and asked to report a block
| 1
|
553,291
| 16,362,114,475
|
IssuesEvent
|
2021-05-14 11:08:15
|
epam/ketcher
|
https://api.github.com/repos/epam/ketcher
|
opened
|
Blank page and a console error appear while saving
|
bug priority: high
|
_Affected versions:_ Remote, Standalone
Tested on Ketcher v2.2.1-102-g430a3502
_Steps to reproduce:_
1. Launch Ketcher.
2. Create any structure (e.g. chain).
2. Click the 'Save As' button.
3. Select any format except MDL Molfile V2000.
_Expected result:_ It's possible to save the structure.
_Actual result:_ Blank page and a console error appear.

|
1.0
|
Blank page and a console error appear while saving - _Affected versions:_ Remote, Standalone
Tested on Ketcher v2.2.1-102-g430a3502
_Steps to reproduce:_
1. Launch Ketcher.
2. Create any structure (e.g. chain).
2. Click the 'Save As' button.
3. Select any format except MDL Molfile V2000.
_Expected result:_ It's possible to save the structure.
_Actual result:_ Blank page and a console error appear.

|
non_process
|
blank page and a console error appear while saving affected versions remote standalone tested on ketcher steps to reproduce launch ketcher create any structure e g chain click the save as button select any format except mdl molfile expected result it s possible to save the structure actual result blank page and a console error appear
| 0
|
15,333
| 3,456,148,928
|
IssuesEvent
|
2015-12-17 23:27:48
|
infiniteautomation/ma-core-public
|
https://api.github.com/repos/infiniteautomation/ma-core-public
|
closed
|
Excel Reports - Saving without points quiet failure
|
Enhancement Ready for Testing
|
Saving a report with no points produces a validation error message at the top, but no information as to the error.
|
1.0
|
Excel Reports - Saving without points quiet failure - Saving a report with no points produces a validation error message at the top, but no information as to the error.
|
non_process
|
excel reports saving without points quiet failure saving a report with no points produces a validation error message at the top but no information as to the error
| 0
|
15,871
| 20,036,623,721
|
IssuesEvent
|
2022-02-02 12:36:55
|
syncfusion/ej2-angular-ui-components
|
https://api.github.com/repos/syncfusion/ej2-angular-ui-components
|
closed
|
It can not use two document editor components in one parent component.
|
word-processor
|
HTML elements in document editor have an id property. When using more than one document editor component in one parent component, the app raises warnings:

After destroying the component that contains two document editor components and then whenever I click anywhere on the app page, the app shows this error:

|
1.0
|
It can not use two document editor components in one parent component. - HTML elements in document editor have an id property. When using more than one document editor component in one parent component, the app raises warnings:

After destroying the component that contains two document editor components and then whenever I click anywhere on the app page, the app shows this error:

|
process
|
it can not use two document editor components in one parent component html elements in document editor have an id property when using more than one document editor component in one parent component the app raises warnings after destroying the component that contains two document editor components and then whenever i click anywhere on the app page the app shows this error
| 1
|
90,532
| 26,132,479,138
|
IssuesEvent
|
2022-12-29 07:38:24
|
spack/spack
|
https://api.github.com/repos/spack/spack
|
opened
|
Installation issue: esmf
|
build-error
|
### Steps to reproduce the issue
ESMF fails to compile on NOAA WCOSS2/Acorn systems with Cray PE+Intel compilers+Cray MPICH. Builds successfully if ESMF_OS=Linux and ESMF_COMM=mpich3, but esmf/package.py sets those to Unicos and mpi, respectively.
```console
$ spack spec -l esmf
Input spec
--------------------------------
esmf
Concretized
--------------------------------
6rxazwx esmf@8.3.0b09%intel@19.1.3.304~debug~external-lapack+mpi+netcdf~parallelio+pio~pnetcdf~shared~xerces build_system=makefile arch=linux-sles15-zen2
zhz3fn4 ^cray-mpich@8.1.9%intel@19.1.3.304~wrappers build_system=generic arch=linux-sles15-zen2
7vjuutq ^libxml2@2.10.3%intel@19.1.3.304~python build_system=autotools arch=linux-sles15-zen2
inla2gw ^libiconv@1.16%intel@19.1.3.304 build_system=autotools libs=shared,static arch=linux-sles15-zen2
eyz7fdy ^pkg-config@0.29.2%intel@19.1.3.304+internal_glib build_system=autotools arch=linux-sles15-zen2
ilrrzhx ^xz@5.2.6%intel@19.1.3.304~pic build_system=autotools libs=shared,static arch=linux-sles15-zen2
e65xq45 ^netcdf-c@4.7.4%intel@19.1.3.304~dap~fsync~hdf4~jna+mpi+optimize~parallel-netcdf+pic~shared build_system=autotools arch=linux-sles15-zen2
xc2opw5 ^hdf5@1.10.6%intel@19.1.3.304~cxx+fortran+hl~ipo~java+mpi~shared~szip+threadsafe+tools api=default build_system=cmake build_type=RelWithDebInfo arch=linux-sles15-zen2
utbfs5w ^cmake@3.20.2%intel@19.1.3.304~doc+ncurses+ownlibs~qt build_system=generic build_type=Release arch=linux-sles15-zen2
sj5fkki ^m4@1.4.18%intel@19.1.3.304+sigsegv build_system=autotools patches=3877ab5,fc9b616 arch=linux-sles15-zen2
cry7dbu ^netcdf-fortran@4.5.4%intel@19.1.3.304~doc+pic~shared build_system=autotools arch=linux-sles15-zen2
ij722et ^zlib@1.2.11%intel@19.1.3.304+optimize+pic~shared build_system=makefile arch=linux-sles15-zen2
...
```
@climbfuji @jedwards4b
### Error message
Here is a sampling of the error messages (other undefined references include mpi_send_, mpi_wait_, etc.):
<details><summary>Error message</summary><pre>
>> 13388 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:162: undefined reference to `mpi_bcast_'
>> 13389 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:163: undefined reference to `mpi_bcast_'
>> 13390 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:164: undefined reference to `mpi_bcast_'
>> 13391 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:165: undefined reference to `mpi_bcast_'
</pre></details>
### Information on your system
Lmod modules in ESMF build env:
1) craype-x86-rome (H) 2) envvar/1.0 3) PrgEnv-intel/8.3.3 4) intel/19.1.3.304 5) craype/2.7.13 6) libfabric/1.11.0.0. (H) 7) craype-network-ofi (H) 8) cray-mpich/8.1.9
spack debug report:
* **Spack:** 0.20.0.dev0 (3a152138df8f6db58019b1b8d19b75a0bbbd0c23)
* **Python:** 3.6.15
* **Platform:** linux-sles15-zen2
* **Concretizer:** clingo
### Additional information
[spack-build-env.txt](https://github.com/spack/spack/files/10318160/spack-build-env.txt)
[spack-build-out.txt](https://github.com/spack/spack/files/10318162/spack-build-out.txt)
### General information
- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [X] I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers
- [X] I have uploaded the build log and environment files
- [X] I have searched the issues of this repo and believe this is not a duplicate
|
1.0
|
Installation issue: esmf - ### Steps to reproduce the issue
ESMF fails to compile on NOAA WCOSS2/Acorn systems with Cray PE+Intel compilers+Cray MPICH. Builds successfully if ESMF_OS=Linux and ESMF_COMM=mpich3, but esmf/package.py sets those to Unicos and mpi, respectively.
```console
$ spack spec -l esmf
Input spec
--------------------------------
esmf
Concretized
--------------------------------
6rxazwx esmf@8.3.0b09%intel@19.1.3.304~debug~external-lapack+mpi+netcdf~parallelio+pio~pnetcdf~shared~xerces build_system=makefile arch=linux-sles15-zen2
zhz3fn4 ^cray-mpich@8.1.9%intel@19.1.3.304~wrappers build_system=generic arch=linux-sles15-zen2
7vjuutq ^libxml2@2.10.3%intel@19.1.3.304~python build_system=autotools arch=linux-sles15-zen2
inla2gw ^libiconv@1.16%intel@19.1.3.304 build_system=autotools libs=shared,static arch=linux-sles15-zen2
eyz7fdy ^pkg-config@0.29.2%intel@19.1.3.304+internal_glib build_system=autotools arch=linux-sles15-zen2
ilrrzhx ^xz@5.2.6%intel@19.1.3.304~pic build_system=autotools libs=shared,static arch=linux-sles15-zen2
e65xq45 ^netcdf-c@4.7.4%intel@19.1.3.304~dap~fsync~hdf4~jna+mpi+optimize~parallel-netcdf+pic~shared build_system=autotools arch=linux-sles15-zen2
xc2opw5 ^hdf5@1.10.6%intel@19.1.3.304~cxx+fortran+hl~ipo~java+mpi~shared~szip+threadsafe+tools api=default build_system=cmake build_type=RelWithDebInfo arch=linux-sles15-zen2
utbfs5w ^cmake@3.20.2%intel@19.1.3.304~doc+ncurses+ownlibs~qt build_system=generic build_type=Release arch=linux-sles15-zen2
sj5fkki ^m4@1.4.18%intel@19.1.3.304+sigsegv build_system=autotools patches=3877ab5,fc9b616 arch=linux-sles15-zen2
cry7dbu ^netcdf-fortran@4.5.4%intel@19.1.3.304~doc+pic~shared build_system=autotools arch=linux-sles15-zen2
ij722et ^zlib@1.2.11%intel@19.1.3.304+optimize+pic~shared build_system=makefile arch=linux-sles15-zen2
...
```
@climbfuji @jedwards4b
### Error message
Here is a sampling of the error messages (other undefined references include mpi_send_, mpi_wait_, etc.):
<details><summary>Error message</summary><pre>
>> 13388 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:162: undefined reference to `mpi_bcast_'
>> 13389 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:163: undefined reference to `mpi_bcast_'
>> 13390 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:164: undefined reference to `mpi_bcast_'
>> 13391 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:165: undefined reference to `mpi_bcast_'
</pre></details>
### Information on your system
Lmod modules in ESMF build env:
1) craype-x86-rome (H) 2) envvar/1.0 3) PrgEnv-intel/8.3.3 4) intel/19.1.3.304 5) craype/2.7.13 6) libfabric/1.11.0.0. (H) 7) craype-network-ofi (H) 8) cray-mpich/8.1.9
spack debug report:
* **Spack:** 0.20.0.dev0 (3a152138df8f6db58019b1b8d19b75a0bbbd0c23)
* **Python:** 3.6.15
* **Platform:** linux-sles15-zen2
* **Concretizer:** clingo
### Additional information
[spack-build-env.txt](https://github.com/spack/spack/files/10318160/spack-build-env.txt)
[spack-build-out.txt](https://github.com/spack/spack/files/10318162/spack-build-out.txt)
### General information
- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [X] I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers
- [X] I have uploaded the build log and environment files
- [X] I have searched the issues of this repo and believe this is not a duplicate
|
non_process
|
installation issue esmf steps to reproduce the issue esmf fails to compile on noaa acorn systems with cray pe intel compilers cray mpich builds successfully if esmf os linux and esmf comm but esmf package py sets those to unicos and mpi respectively console spack spec l esmf input spec esmf concretized esmf intel debug external lapack mpi netcdf parallelio pio pnetcdf shared xerces build system makefile arch linux cray mpich intel wrappers build system generic arch linux intel python build system autotools arch linux libiconv intel build system autotools libs shared static arch linux pkg config intel internal glib build system autotools arch linux ilrrzhx xz intel pic build system autotools libs shared static arch linux netcdf c intel dap fsync jna mpi optimize parallel netcdf pic shared build system autotools arch linux intel cxx fortran hl ipo java mpi shared szip threadsafe tools api default build system cmake build type relwithdebinfo arch linux cmake intel doc ncurses ownlibs qt build system generic build type release arch linux intel sigsegv build system autotools patches arch linux netcdf fortran intel doc pic shared build system autotools arch linux zlib intel optimize pic shared build system makefile arch linux climbfuji error message here is a sampling of the error messages other undefined references include mpi send mpi wait etc error message usr gcc suse linux suse linux bin ld path to cache build stage spack stage esmf spack src src infrastr ucture io pio piodarray in undefined reference to mpi bcast usr gcc suse linux suse linux bin ld path to cache build stage spack stage esmf spack src src infrastr ucture io pio piodarray in undefined reference to mpi bcast usr gcc suse linux suse linux bin ld path to cache build stage spack stage esmf spack src src infrastr ucture io pio piodarray in undefined reference to mpi bcast usr gcc suse linux suse linux bin ld path to cache build stage spack stage esmf spack src src infrastr ucture io pio piodarray in 
undefined reference to mpi bcast information on your system lmod modules in esmf build env craype rome h envvar prgenv intel intel craype libfabric h craype network ofi h cray mpich spack debug report spack python platform linux concretizer clingo additional information general information i have run spack debug report and reported the version of spack python platform i have run spack maintainers and mentioned any maintainers i have uploaded the build log and environment files i have searched the issues of this repo and believe this is not a duplicate
| 0
|
20,050
| 10,450,801,217
|
IssuesEvent
|
2019-09-19 11:24:59
|
nais/naiserator
|
https://api.github.com/repos/nais/naiserator
|
closed
|
Create ClusterRole with reduced privileges
|
good first issue security
|
Today, naiserator uses the `cluster-admin` role. This is too much privilege and must be constrained.
|
True
|
Create ClusterRole with reduced privileges - Today, naiserator uses the `cluster-admin` role. This is too much privilege and must be constrained.
|
non_process
|
create clusterrole with reduced privileges today naiserator uses the cluster admin role this is too much privilege and must be constrained
| 0
|
17,054
| 22,471,514,977
|
IssuesEvent
|
2022-06-22 08:32:03
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
opened
|
[Epic] Process Instance Modification
|
kind/epic team/process-automation
|
## Description
As a user, I can modify an active process instance to repair the execution. The execution may be stuck and I want to continue the execution on a different activity (i.e. skip or repeat activities).
Use cases:
* Repair a process instance that is stuck (primary)
* Testing activities of a process in isolation (secondary)
Additional resources:
* Related to #9366
* [Kickoff meeting](https://docs.google.com/document/d/1JOHLvFv83ysmiliol0As4-f9nhCeXR6wTlAnG1UvNoQ) (internal)
## Concept
We modify a process instance by **terminating** active element instances and **activating** new element instances.
The termination of an element instance includes:
* writing a `terminating` and `terminated` event
* deleting event subscriptions
* resolving incidents
* canceling jobs
The activation of an element instance is similar to #9366 and includes:
* writing a `activate` command
* create variables if defined
* if no element instance of the flow scope exists then activate the flow scope by writing an `activating` and `activated` event
### Ancestor selection for activation
This is an advanced feature for activating a new element instance. By default, the new element instance is created within an existing element instance of the flow scope. For example, when activating an element inside an embedded subprocess and the subprocess is already active.
If there is more than one element instance of the flow scope active then the engine can't decide which element instance to create the new element instance in. Instead, the element instance must be selected by its element instance key. The new element instance is created within the selected element instance.
If the selected element instance is not of the flow scope but from a higher scope (e.g. the process instance key instead of the element instance key of the subprocess) then the engine creates a new element instance of the flow scope first and then creates the new element instance within this scope.
### Propagation of termination
If an element instance is terminated and the element is a container (i.e. an embedded/event subprocess, multi-instance, call activity, process) then the engine terminates also the child instances (recursively).
The termination is also propagated to higher scopes. After the termination and activation instructions are applied, the engine checks the element instances of container elements. If an element instance of a container element doesn't have any child instances anymore then the engine terminates also the element instance of the container element (recursively).
## Task Breakdown
### Broker
### Gateway API
### Exporters/Metrics
### Testing
### Zeebe Process Test
### Documentation
### Out of scope
|
1.0
|
[Epic] Process Instance Modification - ## Description
As a user, I can modify an active process instance to repair the execution. The execution may be stuck and I want to continue the execution on a different activity (i.e. skip or repeat activities).
Use cases:
* Repair a process instance that is stuck (primary)
* Testing activities of a process in isolation (secondary)
Additional resources:
* Related to #9366
* [Kickoff meeting](https://docs.google.com/document/d/1JOHLvFv83ysmiliol0As4-f9nhCeXR6wTlAnG1UvNoQ) (internal)
## Concept
We modify a process instance by **terminating** active element instances and **activating** new element instances.
The termination of an element instance includes:
* writing a `terminating` and `terminated` event
* deleting event subscriptions
* resolving incidents
* canceling jobs
The activation of an element instance is similar to #9366 and includes:
* writing a `activate` command
* create variables if defined
* if no element instance of the flow scope exists then activate the flow scope by writing an `activating` and `activated` event
### Ancestor selection for activation
This is an advanced feature for activating a new element instance. By default, the new element instance is created within an existing element instance of the flow scope. For example, when activating an element inside an embedded subprocess and the subprocess is already active.
If there is more than one element instance of the flow scope active then the engine can't decide which element instance to create the new element instance in. Instead, the element instance must be selected by its element instance key. The new element instance is created within the selected element instance.
If the selected element instance is not of the flow scope but from a higher scope (e.g. the process instance key instead of the element instance key of the subprocess) then the engine creates a new element instance of the flow scope first and then creates the new element instance within this scope.
### Propagation of termination
If an element instance is terminated and the element is a container (i.e. an embedded/event subprocess, multi-instance, call activity, process) then the engine terminates also the child instances (recursively).
The termination is also propagated to higher scopes. After the termination and activation instructions are applied, the engine checks the element instances of container elements. If an element instance of a container element doesn't have any child instances anymore then the engine terminates also the element instance of the container element (recursively).
## Task Breakdown
### Broker
### Gateway API
### Exporters/Metrics
### Testing
### Zeebe Process Test
### Documentation
### Out of scope
|
process
|
process instance modification description as a user i can modify an active process instance to repair the execution the execution may be stuck and i want to continue the execution on a different activity i e skip or repeat activities use cases repair a process instance that is stuck primary testing activities of a process in isolation secondary additional resources related to internal concept we modify a process instance by terminating active element instances and activating new element instances the termination of an element instance includes writing a terminating and terminated event deleting event subscriptions resolving incidents canceling jobs the activation of an element instance is similar to and includes writing a activate command create variables if defined if no element instance of the flow scope exists then activate the flow scope by writing an activating and activated event ancestor selection for activation this is an advanced feature for activating a new element instance by default the new element instance is created within an existing element instance of the flow scope for example when activating an element inside an embedded subprocess and the subprocess is already active if there is more than one element instance of the flow scope active then the engine can t decide which element instance to create the new element instance in instead the element instance must be selected by its element instance key the new element instance is created within the selected element instance if the selected element instance is not of the flow scope but from a higher scope e g the process instance key instead of the element instance key of the subprocess then the engine creates a new element instance of the flow scope first and then creates the new element instance within this scope propagation of termination if an element instance is terminated and the element is a container i e an embedded event subprocess multi instance call activity process then the engine terminates 
also the child instances recursively the termination is also propagated to higher scopes after the termination and activation instructions are applied the engine checks the element instances of container elements if an element instance of a container element doesn t have any child instances anymore then the engine terminates also the element instance of the container element recursively task breakdown broker gateway api exporters metrics testing zeebe process test documentation out of scope
| 1
|
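The termination-propagation behaviour described in the Zeebe record above (terminating a container element also terminates its child instances recursively, and a container left with no children is itself terminated) can be sketched as a small tree walk. This is an illustrative model only — the class and method names are hypothetical, not Zeebe's actual engine code:

```python
# Illustrative model of termination propagation (hypothetical names,
# not Zeebe's actual engine code).

class ElementInstance:
    def __init__(self, element_id, parent=None):
        self.element_id = element_id
        self.parent = parent
        self.children = []
        self.state = "ACTIVE"
        if parent is not None:
            parent.children.append(self)

    def terminate(self):
        """Terminate this instance and all child instances recursively."""
        for child in list(self.children):
            child.terminate()
        self.state = "TERMINATED"
        if self.parent is not None:
            self.parent.children.remove(self)
            # Propagate upward: a container with no remaining children
            # is terminated as well.
            if not self.parent.children:
                self.parent.terminate()

process = ElementInstance("process")
subprocess = ElementInstance("subprocess", parent=process)
task = ElementInstance("task", parent=subprocess)

task.terminate()
print(process.state)  # the emptied subprocess and process terminate too
```

Terminating the innermost `task` empties the subprocess, which empties the process, so the termination propagates all the way up — the behaviour the record describes for containers after modification instructions are applied.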
239,277
| 18,267,922,111
|
IssuesEvent
|
2021-10-04 10:37:21
|
marmelab/react-admin
|
https://api.github.com/repos/marmelab/react-admin
|
closed
|
Module not found: Can't resolve 'graphql' in 'C:\Users\me\Desktop\react-admin-example\node_modules\ra-data-graphql-simple\esm'
|
good first issue documentation
|
I followed the instructions [here](https://www.npmjs.com/package/ra-data-graphql-simple) but I got the following error
**node_modules/ra-data-graphql-simple/esm/buildGqlQuery.js
Module not found: Can't resolve 'graphql' in 'C:\Users\Razi\Desktop\react-admin-auth0-example\node_modules\ra-data-graphql-simple\esm'**
in App.js :
```
import * as React from 'react';
import { Provider } from 'react-redux';
import { Component } from 'react';
import buildGraphQLProvider from 'ra-data-graphql-simple';
// material
import PostIcon from '@material-ui/icons/Book';
import UserIcon from '@material-ui/icons/Group';
// react admin
import {fetchUtils,Admin, Resource, ListGuesser} from 'react-admin';
import jsonServerProvider from 'ra-data-json-server';
// pages
import {OrderList, OrderEdit, OrderCreate, OrderShow} from './pages/orders';
import {UserList} from './pages/users';
import loginPage from "./pages/login";
// components
import Dashboard from './components/Dashboard';
import authProvider from './utils/authProvider';
// browser history
import { createBrowserHistory as createHistory } from 'history';
const history = createHistory();
class App extends Component {
constructor() {
super();
this.state = { dataProvider: null };
}
componentDidMount() {
buildGraphQLProvider({ clientOptions: { uri: ' https://domain.name/graphql' }})
.then(dataProvider => this.setState({ dataProvider }));
}
render() {
const { dataProvider } = this.state;
if (!dataProvider) {
return <div>Loading</div>;
}
return (
<Admin
authProvider={authProvider}
dataProvider={dataProvider}
history={history}
dashboard={Dashboard}
loginPage={loginPage}
>
<Resource
name="orders"
icon={PostIcon}
list={OrderList}
edit={OrderEdit}
create={OrderCreate}
show={OrderShow}
/>
</Admin>
);
}
}
export default App;
```
My dependencies :
```
"dependencies": {
"@auth0/auth0-spa-js": "^1.12.1",
"@material-ui/core": "^4.10.0",
"history": "^4.10.1",
"html2canvas": "^1.0.0-rc.7",
"jspdf": "^2.2.0",
"jspdf-react": "^1.0.11",
"ra-data-graphql": "^3.10.1",
"ra-data-graphql-simple": "^3.10.2",
"ra-data-json-server": "^3.0.0",
"react": "^16.9.0",
"react-admin": "^3.11.0",
"react-dom": "^16.9.0",
"react-pdf": "^5.0.0",
"react-scripts": "^3.0.0",
"react-to-print": "^2.12.0"
},
```
|
1.0
|
Module not found: Can't resolve 'graphql' in 'C:\Users\me\Desktop\react-admin-example\node_modules\ra-data-graphql-simple\esm' -
I followed the instructions [here](https://www.npmjs.com/package/ra-data-graphql-simple) but I got the following error
**node_modules/ra-data-graphql-simple/esm/buildGqlQuery.js
Module not found: Can't resolve 'graphql' in 'C:\Users\Razi\Desktop\react-admin-auth0-example\node_modules\ra-data-graphql-simple\esm'**
in App.js :
```
import * as React from 'react';
import { Provider } from 'react-redux';
import { Component } from 'react';
import buildGraphQLProvider from 'ra-data-graphql-simple';
// material
import PostIcon from '@material-ui/icons/Book';
import UserIcon from '@material-ui/icons/Group';
// react admin
import {fetchUtils,Admin, Resource, ListGuesser} from 'react-admin';
import jsonServerProvider from 'ra-data-json-server';
// pages
import {OrderList, OrderEdit, OrderCreate, OrderShow} from './pages/orders';
import {UserList} from './pages/users';
import loginPage from "./pages/login";
// components
import Dashboard from './components/Dashboard';
import authProvider from './utils/authProvider';
// browser history
import { createBrowserHistory as createHistory } from 'history';
const history = createHistory();
class App extends Component {
constructor() {
super();
this.state = { dataProvider: null };
}
componentDidMount() {
buildGraphQLProvider({ clientOptions: { uri: ' https://domain.name/graphql' }})
.then(dataProvider => this.setState({ dataProvider }));
}
render() {
const { dataProvider } = this.state;
if (!dataProvider) {
return <div>Loading</div>;
}
return (
<Admin
authProvider={authProvider}
dataProvider={dataProvider}
history={history}
dashboard={Dashboard}
loginPage={loginPage}
>
<Resource
name="orders"
icon={PostIcon}
list={OrderList}
edit={OrderEdit}
create={OrderCreate}
show={OrderShow}
/>
</Admin>
);
}
}
export default App;
```
My dependencies :
```
"dependencies": {
"@auth0/auth0-spa-js": "^1.12.1",
"@material-ui/core": "^4.10.0",
"history": "^4.10.1",
"html2canvas": "^1.0.0-rc.7",
"jspdf": "^2.2.0",
"jspdf-react": "^1.0.11",
"ra-data-graphql": "^3.10.1",
"ra-data-graphql-simple": "^3.10.2",
"ra-data-json-server": "^3.0.0",
"react": "^16.9.0",
"react-admin": "^3.11.0",
"react-dom": "^16.9.0",
"react-pdf": "^5.0.0",
"react-scripts": "^3.0.0",
"react-to-print": "^2.12.0"
},
```
|
non_process
|
module not found can t resolve graphql in c users me desktop react admin example node modules ra data graphql simple esm i followed the instructions but i got the following error node modules ra data graphql simple esm buildgqlquery js module not found can t resolve graphql in c users razi desktop react admin example node modules ra data graphql simple esm in app js import as react from react import provider from react redux import component from react import buildgraphqlprovider from ra data graphql simple material import posticon from material ui icons book import usericon from material ui icons group react admin import fetchutils admin resource listguesser from react admin import jsonserverprovider from ra data json server pages import orderlist orderedit ordercreate ordershow from pages orders import userlist from pages users import loginpage from pages login components import dashboard from components dashboard import authprovider from utils authprovider browser history import createbrowserhistory as createhistory from history const history createhistory class app extends component constructor super this state dataprovider null componentdidmount buildgraphqlprovider clientoptions uri then dataprovider this setstate dataprovider render const dataprovider this state if dataprovider return loading return admin authprovider authprovider dataprovider dataprovider history history dashboard dashboard loginpage loginpage resource name orders icon posticon list orderlist edit orderedit create ordercreate show ordershow export default app my dependencies dependencies spa js material ui core history rc jspdf jspdf react ra data graphql ra data graphql simple ra data json server react react admin react dom react pdf react scripts react to print
| 0
|
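The error in the row above is the classic symptom of a missing peer dependency: `ra-data-graphql-simple` expects the `graphql` package to be installed alongside it, and it does not appear in the listed dependencies (the usual fix is `npm install graphql`). A minimal sketch of that diagnosis, with a hypothetical helper name:

```python
# Minimal sketch: detect a missing `graphql` peer dependency, the usual
# cause of "Module not found: Can't resolve 'graphql'" above.
# (Hypothetical helper; the actual fix is simply `npm install graphql`.)
import json

def missing_peer_deps(package_json_text, required=("graphql",)):
    pkg = json.loads(package_json_text)
    declared = set(pkg.get("dependencies", {})) | set(pkg.get("devDependencies", {}))
    return [name for name in required if name not in declared]

example = """{
  "dependencies": {
    "ra-data-graphql-simple": "^3.10.2",
    "react-admin": "^3.11.0"
  }
}"""

print(missing_peer_deps(example))  # ['graphql']
```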
1,635
| 4,254,941,301
|
IssuesEvent
|
2016-07-09 03:58:35
|
pattern-lab/the-spec
|
https://api.github.com/repos/pattern-lab/the-spec
|
closed
|
Discussion & Vote: Developing an Acid Test StarterKit
|
process-enhancement vote pending / needed
|
I need to check if my changes to core affect output in an unexpected way. I also need to make sure my changes meet the needs of spec.
-----
We have no good way to ensure that Node and PHP are developing towards spec or if we are in compliance. We use the demo but it doesn't address all of the various test cases we need to develop to properly define the boundaries of the spec.
An [Acid Test](http://www.acidtests.org) StarterKit could give us a common base to use for testing. If we roll in visual regression testing we can quickly see where our changes break the expected output. This idea is based on the initial work of James Nash so full credit to him for the idea.
The timeline for this feature is sooner rather than later. At a minimum we can roll out the StarterKit and tack on visual regression testing at some point in the future. I'm willing to put together the patterns for the StarterKit as PHP is closest to being spec-compliant.
Tagging process-enhancement because this is all about making sure we work better together across platforms.
This vote will close at 10EST on July 22nd or once two voting members have given the yay or nay.
/cc @pattern-lab/voting-members
|
1.0
|
Discussion & Vote: Developing an Acid Test StarterKit -
I need to check if my changes to core affect output in an unexpected way. I also need to make sure my changes meet the needs of spec.
-----
We have no good way to ensure that Node and PHP are developing towards spec or if we are in compliance. We use the demo but it doesn't address all of the various test cases we need to develop to properly define the boundaries of the spec.
An [Acid Test](http://www.acidtests.org) StarterKit could give us a common base to use for testing. If we roll in visual regression testing we can quickly see where our changes break the expected output. This idea is based on the initial work of James Nash so full credit to him for the idea.
The timeline for this feature is sooner rather than later. At a minimum we can roll out the StarterKit and tack on visual regression testing at some point in the future. I'm willing to put together the patterns for the StarterKit as PHP is closest to being spec-compliant.
Tagging process-enhancement because this is all about making sure we work better together across platforms.
This vote will close at 10EST on July 22nd or once two voting members have given the yay or nay.
/cc @pattern-lab/voting-members
|
process
|
discussion vote developing an acid test starterkit i need to check if my changes to core affect output in an unexpected way i also need to make sure my changes meet the needs of spec we have no good way to ensure that node and php are developing towards spec or if we are in compliance we use the demo but it doesn t address all of the various test cases we need to develop to properly define the boundaries of the spec an starterkit could give us a common base to use for testing if we roll in visual regression testing we can quickly see where our changes break the expected output this idea is based on the initial work of james nash so full credit to him for the idea the timeline for this feature is sooner rather than later at a minimum we can roll out the starterkit and tack on visual regression testing at some point in the future i m willing to put together the patterns for the starterkit as php is closest to being spec compliant tagging process enhancement because this is all about making sure we work better together across platforms this vote will close at on july or once two voting members have given the yay or nay cc pattern lab voting members
| 1
|
18,353
| 6,583,127,035
|
IssuesEvent
|
2017-09-13 03:24:23
|
HypothesisWorks/hypothesis-python
|
https://api.github.com/repos/HypothesisWorks/hypothesis-python
|
closed
|
Keeping Hypothesis current on conda-forge
|
CI-and-build
|
Downstream issue: conda-forge/hypothesis-feedstock#8
`conda` is a pip+virtualenv alternative which is very popular for scientific and data python users (including me). The default repos are maintained by Continuum, but there is also an increasingly popular community-run channel called conda-forge (think RHEL: CentOS). Having a current version of Hypothesis on conda-forge would likely encourage PyData projects to use it for some of their tests - especially given our NumPy support.
There is already a Hypothesis package in the conda-forge channel (linked above), but it receives relatively infrequent updates - about once a quarter - because the process is largely manual. Helping them stay up to date by issuing a pull with the new version number and hash as part of our deploy script would be very nice. Any bot we use for #616 could automate it entirely, which would be even better.
|
1.0
|
Keeping Hypothesis current on conda-forge - Downstream issue: conda-forge/hypothesis-feedstock#8
`conda` is a pip+virtualenv alternative which is very popular for scientific and data python users (including me). The default repos are maintained by Continuum, but there is also an increasingly popular community-run channel called conda-forge (think RHEL: CentOS). Having a current version of Hypothesis on conda-forge would likely encourage PyData projects to use it for some of their tests - especially given our NumPy support.
There is already a Hypothesis package in the conda-forge channel (linked above), but it receives relatively infrequent updates - about once a quarter - because the process is largely manual. Helping them stay up to date by issuing a pull with the new version number and hash as part of our deploy script would be very nice. Any bot we use for #616 could automate it entirely, which would be even better.
|
non_process
|
keeping hypothesis current on conda forge downstream issue conda forge hypothesis feedstock conda is a pip virtualenv alternative which is very popular for scientific and data python users including me the default repos are maintained by continuum but there is also an increasingly popular community run channel called conda forge think rhel centos having a current version of hypothesis on conda forge would likely encourage pydata projects to use it for some of their tests especially given our numpy support there is already a hypothesis package in the conda forge channel linked above but it receives relatively infrequent updates about once a quarter because the process is largely manual helping them stay up to date by issuing a pull with the new version number and hash as part of our deploy script would be very nice any bot we use for could automate it entirely which would be even better
| 0
|
104,108
| 8,961,885,762
|
IssuesEvent
|
2019-01-28 10:51:17
|
humera987/FXLabs-Test-Automation
|
https://api.github.com/repos/humera987/FXLabs-Test-Automation
|
closed
|
API Test 1 : ApiV1ProjectsIdSearchAutoSuggestionsSearchStatusGetQueryParamPagesizeSla
|
API Test 1 API Test 1
|
Project : API Test 1
Job : JOB
Env : ENV
Category : null
Tags : null
Severity : null
Region : AliTest
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZmZmOTE2NjItMTk4Ni00Mjk2LWE3OWMtNDYxYWY1MjQ0OWVm; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 28 Jan 2019 10:15:36 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/projects/KadqwKLd/search-auto-suggestions/search/KadqwKLd?pageSize=1001
Request :
Response :
{
"timestamp" : "2019-01-28T10:15:37.617+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/projects/KadqwKLd/search-auto-suggestions/search/KadqwKLd"
}
Logs :
Assertion [@StatusCode == 200 AND @ResponseTime < 1000] resolved-to [404 == 200 AND 1844 < 1000] result [Failed]
--- FX Bot ---
|
2.0
|
API Test 1 : ApiV1ProjectsIdSearchAutoSuggestionsSearchStatusGetQueryParamPagesizeSla - Project : API Test 1
Job : JOB
Env : ENV
Category : null
Tags : null
Severity : null
Region : AliTest
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZmZmOTE2NjItMTk4Ni00Mjk2LWE3OWMtNDYxYWY1MjQ0OWVm; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 28 Jan 2019 10:15:36 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/projects/KadqwKLd/search-auto-suggestions/search/KadqwKLd?pageSize=1001
Request :
Response :
{
"timestamp" : "2019-01-28T10:15:37.617+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/projects/KadqwKLd/search-auto-suggestions/search/KadqwKLd"
}
Logs :
Assertion [@StatusCode == 200 AND @ResponseTime < 1000] resolved-to [404 == 200 AND 1844 < 1000] result [Failed]
--- FX Bot ---
|
non_process
|
api test project api test job job env env category null tags null severity null region alitest result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api projects kadqwkld search auto suggestions search kadqwkld logs assertion resolved to result fx bot
| 0
|
16,945
| 22,301,859,557
|
IssuesEvent
|
2022-06-13 09:26:16
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
PostgreSQL should work with array ids
|
process/candidate kind/improvement tech/engines/migration engine tech/engines/query engine tech/engines/introspection engine tech/engines/datamodel topic: postgresql list team/schema team/client topic: postgresql
|
PostgreSQL lets you create a table with an array as the primary key. Prisma does not let you do that, blocking some use cases. We should enable all array types on PostgreSQL for `@id`/`@@id` columns, and make sure the Query Engine works properly with these identifiers.
The Schema Team side is done already in the following PR: https://github.com/prisma/prisma-engines/pull/2971
|
1.0
|
PostgreSQL should work with array ids - PostgreSQL lets you create a table with an array as the primary key. Prisma does not let you do that, blocking some use cases. We should enable all array types on PostgreSQL for `@id`/`@@id` columns, and make sure the Query Engine works properly with these identifiers.
The Schema Team side is done already in the following PR: https://github.com/prisma/prisma-engines/pull/2971
|
process
|
postgresql should work with array ids postgresql lets you to create a table with an array as the primary key prisma does not let you to do that blocking some use cases we should enable all array types on postgresql for id id columns and make sure query engine will work properly with these identifiers the schema team side is done already in the following pr
| 1
|
5,318
| 8,132,382,140
|
IssuesEvent
|
2018-08-18 11:06:20
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
[System.Diagnostics] Using LoadUserProfile on Linux: Starting a process with a different identity is not supported on this platform
|
area-System.Diagnostics.Process bug
|
**Hello!** I'm new to .NET Core (recently moved my project from .NET Framework for it).
### Story
- I didn't know that RunAs is partially available on Linux, so I used all functionality of it in my code.
- I caught the exception `System.PlatformNotSupportedException: Starting a process with a different identity is not supported on this platform`, googled it, and saw the solution of #30140 by @eerhardt.
- **Nothing helped.** Then, I saw `LoadUserProfile = false` in my code, deleted it, and... that was real solution!
### Reproduce
- Use `System.Diagnostics.Process` functionality in code
- Set `LoadUserProfile` in process `StartInfo`
- Build project and run it on Linux (and maybe on other non-Windows platforms)
- Get `System.PlatformNotSupportedException` exception
### Improvement idea
Maybe we need to change the exception message for LoadUserProfile on Linux, so other users can understand it more quickly than I did?
**P.S. Sorry for my bad English, it's not my main language...**
|
1.0
|
[System.Diagnostics] Using LoadUserProfile on Linux: Starting a process with a different identity is not supported on this platform - **Hello!** I'm new to .NET Core (recently moved my project from .NET Framework for it).
### Story
- I didn't know that RunAs is partially available on Linux, so I used all functionality of it in my code.
- I caught the exception `System.PlatformNotSupportedException: Starting a process with a different identity is not supported on this platform`, googled it, and saw the solution of #30140 by @eerhardt.
- **Nothing helped.** Then, I saw `LoadUserProfile = false` in my code, deleted it, and... that was real solution!
### Reproduce
- Use `System.Diagnostics.Process` functionality in code
- Set `LoadUserProfile` in process `StartInfo`
- Build project and run it on Linux (and maybe on other non-Windows platforms)
- Get `System.PlatformNotSupportedException` exception
### Improvement idea
Maybe we need to change the exception message for LoadUserProfile on Linux, so other users can understand it more quickly than I did?
**P.S. Sorry for my bad English, it's not my main language...**
|
process
|
using loaduserprofile on linux starting a process with a different identity is not supported on this platform hello i m new to net core recently moved my project from net framework for it story i didn t know that runas is partially available on linux so i used all functionality of it in my code i catched exception system platformnotsupportedexception starting a process with a different identity is not supported on this platform googled it and saw solution of by eerhardt nothing helped then i saw loaduserprofile false in my code deleted it and that was real solution reproduce use system diagnostics process functionality in code set loaduserprofile in process startinfo build project and run it on linux and maybe on other non windows platforms get system platformnotsupportedexception exception improvement idea maybe we need to change exception message for loaduserprofile on linux so other users can understand it more quckly than me p s sorry for my bad english it s not my main language
| 1
|
7,641
| 10,737,503,526
|
IssuesEvent
|
2019-10-29 13:13:57
|
Open-EO/openeo-api
|
https://api.github.com/repos/Open-EO/openeo-api
|
closed
|
Spec for "bands" cube:dimension?
|
data discovery processes stac / ogc work in progress
|
For [collection metadata](https://open-eo.github.io/openeo-api/apireference/#tag/EO-Data-Discovery/paths/~1collections~1{collection_id}/get) the spec defines the cube dimensions under `properties/cube:dimensions`. For spatial and temporal dimensions the `type` is well defined (`spatial` and `temporal`), but for an "additional dimension" (typically spectral bands, I guess) it is a free-form "custom type".
Example from the example in the spec:
"properties": {
"cube:dimensions": {
.....
"spectral": {
"type": "bands",
"values": ["B1", "B2", "B3", ... ]
}
Example from earth engine (https://earthengine.openeo.org/v0.4/collections/COPERNICUS/S2_SR):
"properties": {
"cube:dimensions": {
.....
"bands": {
"type": "bands",
"values": ["B1", "B2", "B3", ... ]
}
Note how both use the type "bands" , but use a different key ("bands" versus "spectral").
At the moment this "bands" type is just the generic "custom type" and not specified in the spec as far as I know. But wouldn't it be better to do so, as it is a very common dimension?
Also: shouldn't there be a recommendation/spec about the key names in the `cube:dimensions` map/dict? See the above example with "bands" versus "spectral". For example: recommend giving the common dimension names "x", "y", "temporal" and "bands"?
|
1.0
|
Spec for "bands" cube:dimension? - For [collection metadata](https://open-eo.github.io/openeo-api/apireference/#tag/EO-Data-Discovery/paths/~1collections~1{collection_id}/get) the spec defines the cube dimensions under `properties/cube:dimensions`. For spatial and temporal dimensions the `type` is well defined (`spatial` and `temporal`), but for an "additional dimension" (typically spectral bands, I guess) it is a free-form "custom type".
Example from the example in the spec:
"properties": {
"cube:dimensions": {
.....
"spectral": {
"type": "bands",
"values": ["B1", "B2", "B3", ... ]
}
Example from earth engine (https://earthengine.openeo.org/v0.4/collections/COPERNICUS/S2_SR):
"properties": {
"cube:dimensions": {
.....
"bands": {
"type": "bands",
"values": ["B1", "B2", "B3", ... ]
}
Note how both use the type "bands" , but use a different key ("bands" versus "spectral").
At the moment this "bands" type is just the generic "custom type" and not specified in the spec as far as I know. But wouldn't it be better to do so, as it is a very common dimension?
Also: shouldn't there be a recommendation/spec about the key names in the `cube:dimensions` map/dict? See the above example with "bands" versus "spectral". For example: recommend giving the common dimension names "x", "y", "temporal" and "bands"?
|
process
|
spec for bands cube dimension for the spec defines the cube dimensions under properties cube dimensions for spatial and temporal dimensions the type is well defined spatial and temporal but for additional dimension typically for spectral bands i guess it a is free form custom type example from the example in the spec properties cube dimensions spectral type bands values example from earth engine properties cube dimensions bands type bands values note how both use the type bands but use a different key bands versus spectral at the moment this bands type is just the generic custom type and not specified in the spec as far i know but wouldn t it be better to do so as it is a very common dimension also shouldn t there be a recommendation spec about the key names in of the cube dimensions map dict see the above example with bands versus spectral for example recommend to give the common dimension names x y temporal and bands
| 1
|
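The key-versus-type ambiguity discussed in the row above can be made concrete: a consumer should locate the band dimension by its `type` field ("bands") rather than by key name, since backends use different keys ("bands" versus "spectral"). A minimal sketch with a hypothetical helper name:

```python
# Sketch of the ambiguity discussed above: look up the band dimension by
# its `type` ("bands"), because the key name varies by backend.
def find_bands_dimension(cube_dimensions):
    for key, dim in cube_dimensions.items():
        if dim.get("type") == "bands":
            return key, dim.get("values", [])
    return None, []

# Two real-world shapes from the issue: Earth Engine vs. the spec example.
gee = {"bands": {"type": "bands", "values": ["B1", "B2"]}}
spec_example = {"spectral": {"type": "bands", "values": ["B1", "B2"]}}

print(find_bands_dimension(gee)[0])           # 'bands'
print(find_bands_dimension(spec_example)[0])  # 'spectral'
```

Both inputs resolve to the same band list even though the dictionary keys differ — which is exactly why the issue asks for the key names to be standardized.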
4,283
| 7,190,606,802
|
IssuesEvent
|
2018-02-02 17:51:34
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
--verbose doesn't really work
|
libs-utillib status-inprocess type-enhancement
|
In some cases, --verbose does nothing. In some cases, --verbose has only an 'on' / 'off' meaning. In some cases (ethslurp) it has a meaning but only up to level 2. Should make this work correctly or better document it, but not before version 0.2.0 is released.
|
1.0
|
--verbose doesn't really work - In some cases, --verbose does nothing. In some cases, --verbose has only an 'on' / 'off' meaning. In some cases (ethslurp) it has a meaning but only up to level 2. Should make this work correctly or better document it, but not before version 0.2.0 is released.
|
process
|
verbose doesn t really work in some cases verbose does nothing in some cases verbose has only an on off meaning in some cases ethslurp it has a meaning but only up to level should make this work correctly or better document it but not before version is released
| 1
|
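The multi-level `--verbose` semantics discussed in the row above can be sketched with a counting flag. This only illustrates the level semantics — quickBlocks itself is C++, and this Python sketch is not its actual option parser:

```python
# Sketch of a multi-level --verbose flag: each repetition raises the level,
# so tools can act on level 1, 2, 3... rather than a bare on/off switch.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="count", default=0,
                    help="increase verbosity (repeat, e.g. -vv)")

args = parser.parse_args(["-vv"])
print(args.verbose)  # 2
```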
129
| 2,564,931,544
|
IssuesEvent
|
2015-02-06 23:36:37
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
NativeTestService.exe binary should be removed from the repo
|
System.ServiceProcess.ServiceController test bug
|
I should've caught this in the PR, but it looks like we committed a binary for System.ServiceProcess.ServiceController's NativeTestService to the repo. This should get built as a dependency of the test project. There's no need to have the binary committed and referenced directly. See the System.Diagnostics.Process tests for an example.
Since this is a Win32 specific contract, we shouldn't need to worry about converting the build to cmake.
|
1.0
|
NativeTestService.exe binary should be removed from the repo - I should've caught this in the PR, but it looks like we committed a binary for System.ServiceProcess.ServiceController's NativeTestService to the repo. This should get built as a dependency of the test project. There's no need to have the binary committed and referenced directly. See the System.Diagnostics.Process tests for an example.
Since this is a Win32 specific contract, we shouldn't need to worry about converting the build to cmake.
|
process
|
nativetestservice exe binary should be removed from the repo i should ve caught this in the pr but it looks like we committed a binary for system serviceprocess servicecontroller s nativetestservice to the repo this should get built as a dependency of the test project there s no need to have the binary committed and referenced directly see the system diagnostics process tests for an example since this is a specific contract we shouldn t need to worry about converting the build to cmake
| 1
|
226,816
| 18,044,203,150
|
IssuesEvent
|
2021-09-18 15:49:05
|
logicmoo/logicmoo_workspace
|
https://api.github.com/repos/logicmoo/logicmoo_workspace
|
opened
|
logicmoo.pfc.test.sanity_base.NEG_01V JUnit
|
Test_9999 logicmoo.pfc.test.sanity_base unit_test NEG_01V
|
(cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc)
GH_MASTER_ISSUE_FINFO=
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ANEG_01V
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3/packs_sys/pfc/t/sanity_base/neg_01v.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/NEG_01V/logicmoo_pfc_test_sanity_base_NEG_01V_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/67/testReport/logicmoo.pfc.test.sanity_base/NEG_01V/logicmoo_pfc_test_sanity_base_NEG_01V_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3
https://github.com/logicmoo/logicmoo_workspace/blob/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3/packs_sys/pfc/t/sanity_base/neg_01v.pfc
```
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01v.pfc'),
%~ /var/lib/jenkins/.local/share/swi-prolog/pack/logicmoo_utils/prolog/logicmoo_test_header.pl:92
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- use_module(library(statistics)).
%:- mpred_notrace_exec.
% reset runtime counter
%:- mpred_notrace_exec.
% reset runtime counter
:- statistics(runtime,_Secs).
~path(X,X).
path(1,2).
path(2,3).
path(3,4).
path(1,1).
:- mpred_test(path(3, 4)).
%~ mpred_test("Test_0001_Line_0000__path_3",baseKB:path(3,4))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L20
/*~
%~ mpred_test("Test_0001_Line_0000__path_3",baseKB:path(3,4))
passed=info(why_was_true(baseKB:path(3,4)))
Justifications for path(3,4):
 1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L17 ',17)
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0001_Line_0000__path_3'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0001_Line_0000__path_3-junit.xml
~*/
:- mpred_test(path(2, 3)).
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01v.pfc:21
%~ mpred_test("Test_0002_Line_0000__path_2",baseKB:path(2,3))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L21
/*~
%~ mpred_test("Test_0002_Line_0000__path_2",baseKB:path(2,3))
passed=info(why_was_true(baseKB:path(2,3)))
Justifications for path(2,3):
 1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L16 ',16)
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0002_Line_0000__path_2'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0002_Line_0000__path_2-junit.xml
~*/
:- mpred_test(path(1, 2)).
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01v.pfc:22
%~ mpred_test("Test_0003_Line_0000__path_1",baseKB:path(1,2))
%~ FILE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L22
/*~
%~ mpred_test("Test_0003_Line_0000__path_1",baseKB:path(1,2))
passed=info(why_was_true(baseKB:path(1,2)))
Justifications for path(1,2):
  1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L15 ',15)
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0003_Line_0000__path_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0003_Line_0000__path_1-junit.xml
~*/
:- mpred_test(~path(1,1)).
%~ mpred_test("Test_0004_Line_0000__path_1",baseKB: ~path(1,1))
/*~
%~ mpred_test("Test_0004_Line_0000__path_1",baseKB: ~path(1,1))
^ Call: (68) [baseKB] ~path(1, 1)
^ Unify: (68) [baseKB] ~ (baseKB:path(1, 1))
^ Call: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 1)), info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 1)))))
^ Unify: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 1)), info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 1)))))
Call: (76) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (76) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (76) [loop_check] prolog_frame_attribute(1320, parent_goal, loop_check_term_frame(_92900, info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, _92906, _92908))
^ Fail: (76) [loop_check] prolog_frame_attribute(1320, parent_goal, loop_check_term_frame(_92900, info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, _92906, _92908))
^ Redo: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 1)), info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 1)))))
Call: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 1))
Unify: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 1))
^ Call: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 1)), info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 1)))))
^ Unify: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 1)), info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 1)))))
Call: (83) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (83) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (83) [loop_check] prolog_frame_attribute(1459, parent_goal, loop_check_term_frame(_98624, info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, _98630, _98632))
^ Fail: (83) [loop_check] prolog_frame_attribute(1459, parent_goal, loop_check_term_frame(_98624, info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, _98630, _98632))
^ Redo: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 1)), info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 1)))))
Call: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 1))
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 1))
^ Call: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _101956)
^ Unify: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _101956)
^ Call: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _101956)
^ Fail: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _101956)
^ Fail: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _101956)
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 1))
^ Call: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 1))
^ Unify: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 1))
^ Call: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 1))
^ Unify: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 1))
^ Fail: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 1))
^ Redo: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 1))
^ Exit: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 1))
^ Call: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _110126), call(_110126)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Unify: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _110126), call(_110126)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _110126)
^ Fail: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _110126)
Call: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
Unify: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Call: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _113912))
^ Unify: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _113912))
^ Call: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _113912)
^ Fail: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _113912)
^ Fail: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _113912))
Fail: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Fail: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _110126), call(_110126)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _118332)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_118902))
^ Call: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _118902)
^ Fail: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _118902)
^ Redo: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _120904)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _121532)
^ Call: (88) [pfc_lib] mpred_core_database_term(path, 2, _122100)
^ Fail: (88) [pfc_lib] mpred_core_database_term(path, 2, _122100)
^ Fail: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _123474)
^ Call: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _124078), call(_124078)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Unify: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _124078), call(_124078)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _124078)
^ Fail: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _124078)
Call: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
Unify: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _127864))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _127864))
^ Call: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _127864)
^ Fail: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _127864)
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _127864))
Fail: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Fail: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _124078), call(_124078)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_132230, path, 2, prologHybrid), _132260), call(_132260)*->true;clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_132230, path, 2, prologHybrid), _132260), call(_132260)*->true;clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))))
^ Call: (91) [hook_database] clause(mpred_prop(_132230, path, 2, prologHybrid), _132260)
^ Fail: (91) [hook_database] clause(mpred_prop(_132230, path, 2, prologHybrid), _132260)
Call: (91) [hook_database] clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))
Unify: (91) [hook_database] clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))
^ Call: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_132230, path, 2, prologHybrid), _136046))
^ Unify: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_132230, path, 2, prologHybrid), _136046))
^ Call: (93) [baseKB] clause(mpred_prop(_132230, path, 2, prologHybrid), _136046)
^ Fail: (93) [baseKB] clause(mpred_prop(_132230, path, 2, prologHybrid), _136046)
^ Fail: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_132230, path, 2, prologHybrid), _136046))
Fail: (91) [hook_database] clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_132230, path, 2, prologHybrid), _132260), call(_132260)*->true;clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))))
^ Call: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _140466)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_141036))
^ Call: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _141036)
^ Fail: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _141036)
^ Redo: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _143038)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _143666)
^ Call: (89) [pfc_lib] mpred_core_database_term(path, 2, _144234)
^ Fail: (89) [pfc_lib] mpred_core_database_term(path, 2, _144234)
^ Fail: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _145608)
Call: (98) [$autoload] leave_sandbox(_146206)
Unify: (98) [$autoload] leave_sandbox(_146206)
Exit: (98) [$autoload] leave_sandbox(false)
Call: (97) [$autoload] restore_sandbox(false)
Unify: (97) [$autoload] restore_sandbox(false)
Exit: (97) [$autoload] restore_sandbox(false)
Fail: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 1))
^ Fail: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 1)), info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 1)))))
Fail: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 1))
^ Fail: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 1)), info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 1)))))
^ Fail: (68) [baseKB] ~ (baseKB:path(1, 1))
^ Call: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
failure=info((why_was_true(baseKB:(\+ ~path(1,1))),rtrace(baseKB: ~path(1,1))))
no_proof_for(\+ ~path(1,1)).
no_proof_for(\+ ~path(1,1)).
no_proof_for(\+ ~path(1,1)).
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0004_Line_0000__path_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0004_Line_0000__path_1-junit.xml
~*/
:- mpred_test(~path(2,2)).
%~ mpred_test("Test_0005_Line_0000__path_2",baseKB: ~path(2,2))
/*~
%~ mpred_test("Test_0005_Line_0000__path_2",baseKB: ~path(2,2))
^ Call: (68) [baseKB] ~path(2, 2)
^ Unify: (68) [baseKB] ~ (baseKB:path(2, 2))
^ Call: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(2, 2)), info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(2, 2)))))
^ Unify: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(2, 2)), info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(2, 2)))))
Call: (76) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (76) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (76) [loop_check] prolog_frame_attribute(1189, parent_goal, loop_check_term_frame(_259520, info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, _259526, _259528))
^ Fail: (76) [loop_check] prolog_frame_attribute(1189, parent_goal, loop_check_term_frame(_259520, info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, _259526, _259528))
^ Redo: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(2, 2)), info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(2, 2)))))
Call: (76) [pfc_lib] neg_in_code0(baseKB:path(2, 2))
Unify: (76) [pfc_lib] neg_in_code0(baseKB:path(2, 2))
^ Call: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(2, 2)), info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(2, 2)))))
^ Unify: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(2, 2)), info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(2, 2)))))
Call: (83) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (83) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (83) [loop_check] prolog_frame_attribute(1328, parent_goal, loop_check_term_frame(_265244, info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, _265250, _265252))
^ Fail: (83) [loop_check] prolog_frame_attribute(1328, parent_goal, loop_check_term_frame(_265244, info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, _265250, _265252))
^ Redo: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(2, 2)), info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(2, 2)))))
Call: (83) [pfc_lib] neg_may_naf(baseKB:path(2, 2))
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(2, 2))
^ Call: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _268576)
^ Unify: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _268576)
^ Call: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _268576)
^ Fail: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _268576)
^ Fail: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _268576)
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(2, 2))
^ Call: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(2, 2))
^ Unify: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(2, 2))
^ Call: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(2, 2))
^ Unify: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(2, 2))
^ Fail: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(2, 2))
^ Redo: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(2, 2))
^ Exit: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(2, 2))
^ Call: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _276746), call(_276746)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Unify: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _276746), call(_276746)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _276746)
^ Fail: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _276746)
Call: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
Unify: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Call: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _280532))
^ Unify: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _280532))
^ Call: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _280532)
^ Fail: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _280532)
^ Fail: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _280532))
Fail: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Fail: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _276746), call(_276746)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _284952)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_285522))
^ Call: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _285522)
^ Fail: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _285522)
^ Redo: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _287524)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _288152)
^ Call: (88) [pfc_lib] mpred_core_database_term(path, 2, _288720)
^ Fail: (88) [pfc_lib] mpred_core_database_term(path, 2, _288720)
^ Fail: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _290094)
^ Call: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _290698), call(_290698)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Unify: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _290698), call(_290698)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _290698)
^ Fail: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _290698)
Call: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
Unify: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _294484))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _294484))
^ Call: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _294484)
^ Fail: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _294484)
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _294484))
Fail: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Fail: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _290698), call(_290698)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_298850, path, 2, prologHybrid), _298880), call(_298880)*->true;clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_298850, path, 2, prologHybrid), _298880), call(_298880)*->true;clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))))
^ Call: (91) [hook_database] clause(mpred_prop(_298850, path, 2, prologHybrid), _298880)
^ Fail: (91) [hook_database] clause(mpred_prop(_298850, path, 2, prologHybrid), _298880)
Call: (91) [hook_database] clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))
Unify: (91) [hook_database] clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))
^ Call: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_298850, path, 2, prologHybrid), _302666))
^ Unify: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_298850, path, 2, prologHybrid), _302666))
^ Call: (93) [baseKB] clause(mpred_prop(_298850, path, 2, prologHybrid), _302666)
^ Fail: (93) [baseKB] clause(mpred_prop(_298850, path, 2, prologHybrid), _302666)
^ Fail: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_298850, path, 2, prologHybrid), _302666))
Fail: (91) [hook_database] clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_298850, path, 2, prologHybrid), _298880), call(_298880)*->true;clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))))
^ Call: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _307086)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_307656))
^ Call: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _307656)
^ Fail: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _307656)
^ Redo: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _309658)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _310286)
^ Call: (89) [pfc_lib] mpred_core_database_term(path, 2, _310854)
^ Fail: (89) [pfc_lib] mpred_core_database_term(path, 2, _310854)
^ Fail: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _312228)
Call: (98) [$autoload] leave_sandbox(_312826)
Unify: (98) [$autoload] leave_sandbox(_312826)
Exit: (98) [$autoload] leave_sandbox(false)
Call: (97) [$autoload] restore_sandbox(false)
Unify: (97) [$autoload] restore_sandbox(false)
Exit: (97) [$autoload] restore_sandbox(false)
Fail: (83) [pfc_lib] neg_may_naf(baseKB:path(2, 2))
^ Fail: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(2, 2)), info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(2, 2)))))
Fail: (76) [pfc_lib] neg_in_code0(baseKB:path(2, 2))
^ Fail: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(2, 2)), info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(2, 2)))))
^ Fail: (68) [baseKB] ~ (baseKB:path(2, 2))
^ Call: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
failure=info((why_was_true(baseKB:(\+ ~path(2,2))),rtrace(baseKB: ~path(2,2))))
no_proof_for(\+ ~path(2,2)).
no_proof_for(\+ ~path(2,2)).
no_proof_for(\+ ~path(2,2)).
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0005_Line_0000__path_2'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0005_Line_0000__path_2-junit.xml
~*/
/*
:- mpred_test(\+path(1,1)).
:- mpred_test(\+path(2,2)).
*/
:- mpred_test(\+path(3,1)).
%~ mpred_test("Test_0006_Line_0000__naf_path_3",baseKB:(\+path(3,1)))
/*~
%~ mpred_test("Test_0006_Line_0000__naf_path_3",baseKB:(\+path(3,1)))
passed=info(why_was_true(baseKB:(\+path(3,1))))
no_proof_for(\+path(3,1)).
no_proof_for(\+path(3,1)).
no_proof_for(\+path(3,1)).
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0006_Line_0000__naf_path_3'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0006_Line_0000__naf_path_3-junit.xml
~*/
:- mpred_test(\+ ~path(1,5)).
%~ mpred_test("Test_0007_Line_0000__naf_path_1",baseKB:(\+ ~path(1,5)))
/*~
%~ mpred_test("Test_0007_Line_0000__naf_path_1",baseKB:(\+ ~path(1,5)))
passed=info(why_was_true(baseKB:(\+ ~path(1,5))))
no_proof_for(\+ ~path(1,5)).
no_proof_for(\+ ~path(1,5)).
no_proof_for(\+ ~path(1,5)).
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0007_Line_0000__naf_path_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0007_Line_0000__naf_path_1-junit.xml
~*/
:- break.
%~ skipped(blocks_on_input,break)
%~ unused(save_junit_results)
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01v.pfc:36
%~ test_completed_exit(6)
:- dynamic junit_prop/3.
```
totalTime=1
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ANEG_01V
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3/packs_sys/pfc/t/sanity_base/neg_01v.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/NEG_01V/logicmoo_pfc_test_sanity_base_NEG_01V_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/67/testReport/logicmoo.pfc.test.sanity_base/NEG_01V/logicmoo_pfc_test_sanity_base_NEG_01V_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3
https://github.com/logicmoo/logicmoo_workspace/blob/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3/packs_sys/pfc/t/sanity_base/neg_01v.pfc
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k neg_01v.pfc (returned 6)
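The two failing checks in the run above (tests 4 and 5, `~path(1,1)` and `~path(2,2)`) trace back to the interaction between the universally negated fact `~path(X,X)` and the positive fact `path(1,1)`, together with the trace's repeated failures to find negation support (`prologNegByFailure`, `mpred_prop`) for `path/2`. A minimal Python sketch — a toy model of the asserted facts, not the PFC engine — makes the conflict for `path(1,1)` visible:

```python
# Toy model (NOT the PFC engine) of the knowledge base in neg_01v.pfc.
# Positive facts, plus one universally quantified explicit negation ~path(X,X).

pos_facts = {(1, 2), (2, 3), (3, 4), (1, 1)}

def path(a, b):
    """path(a,b) holds iff it was asserted as a positive fact."""
    return (a, b) in pos_facts

def neg_path(a, b):
    """~path(a,b) under the rule ~path(X,X): holds when a == b,
    unless a positive path(a,b) fact conflicts with it."""
    return a == b and (a, b) not in pos_facts

# Tests 1-3 pass: the positive facts are present.
assert path(3, 4) and path(2, 3) and path(1, 2)

# Test 4 fails in the log: path(1,1) was ALSO asserted positively,
# so ~path(1,1) cannot be derived without resolving that conflict.
assert not neg_path(1, 1)

# The toy model still derives ~path(2,2)...
assert neg_path(2, 2)
```

Note the limit of the sketch: the toy model derives `~path(2,2)`, yet test 5 fails in the real run too — the trace shows the engine finding no negation support for `path/2` at all, rather than a conflicting fact.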
3.0
logicmoo.pfc.test.sanity_base.NEG_01V JUnit - (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc)
GH_MASTER_ISSUE_FINFO=
```
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01v.pfc'),
%~ /var/lib/jenkins/.local/share/swi-prolog/pack/logicmoo_utils/prolog/logicmoo_test_header.pl:92
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- use_module(library(statistics)).
%:- mpred_notrace_exec.
% reset runtime counter
%:- mpred_notrace_exec.
% reset runtime counter
:- statistics(runtime,_Secs).
~path(X,X).
path(1,2).
path(2,3).
path(3,4).
path(1,1).
:- mpred_test(path(3, 4)).
%~ mpred_test("Test_0001_Line_0000__path_3",baseKB:path(3,4))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L20
/*~
%~ mpred_test("Test_0001_Line_0000__path_3",baseKB:path(3,4))
passed=info(why_was_true(baseKB:path(3,4)))
Justifications for path(3,4):
[36m 1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L17 ',17) [0m
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0001_Line_0000__path_3'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0001_Line_0000__path_3-junit.xml
~*/
:- mpred_test(path(2, 3)).
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01v.pfc:21
%~ mpred_test("Test_0002_Line_0000__path_2",baseKB:path(2,3))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L21
/*~
%~ mpred_test("Test_0002_Line_0000__path_2",baseKB:path(2,3))
passed=info(why_was_true(baseKB:path(2,3)))
Justifications for path(2,3):
[36m 1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L16 ',16) [0m
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0002_Line_0000__path_2'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0002_Line_0000__path_2-junit.xml
~*/
:- mpred_test(path(1, 2)).
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01v.pfc:22
%~ mpred_test("Test_0003_Line_0000__path_1",baseKB:path(1,2))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L22
/*~
%~ mpred_test("Test_0003_Line_0000__path_1",baseKB:path(1,2))
passed=info(why_was_true(baseKB:path(1,2)))
Justifications for path(1,2):
   1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/neg_01v.pfc#L15 ',15)
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0003_Line_0000__path_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0003_Line_0000__path_1-junit.xml
~*/
:- mpred_test(~path(1,1)).
%~ mpred_test("Test_0004_Line_0000__path_1",baseKB: ~path(1,1))
/*~
%~ mpred_test("Test_0004_Line_0000__path_1",baseKB: ~path(1,1))
^ Call: (68) [baseKB] ~path(1, 1)
^ Unify: (68) [baseKB] ~ (baseKB:path(1, 1))
^ Call: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 1)), info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 1)))))
^ Unify: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 1)), info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 1)))))
Call: (76) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (76) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (76) [loop_check] prolog_frame_attribute(1320, parent_goal, loop_check_term_frame(_92900, info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, _92906, _92908))
^ Fail: (76) [loop_check] prolog_frame_attribute(1320, parent_goal, loop_check_term_frame(_92900, info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, _92906, _92908))
^ Redo: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 1)), info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 1)))))
Call: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 1))
Unify: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 1))
^ Call: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 1)), info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 1)))))
^ Unify: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 1)), info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 1)))))
Call: (83) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (83) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (83) [loop_check] prolog_frame_attribute(1459, parent_goal, loop_check_term_frame(_98624, info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, _98630, _98632))
^ Fail: (83) [loop_check] prolog_frame_attribute(1459, parent_goal, loop_check_term_frame(_98624, info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, _98630, _98632))
^ Redo: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 1)), info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 1)))))
Call: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 1))
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 1))
^ Call: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _101956)
^ Unify: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _101956)
^ Call: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _101956)
^ Fail: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _101956)
^ Fail: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _101956)
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 1))
^ Call: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 1))
^ Unify: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 1))
^ Call: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 1))
^ Unify: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 1))
^ Fail: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(1, 1))
^ Redo: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 1))
^ Exit: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(1, 1))
^ Call: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _110126), call(_110126)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Unify: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _110126), call(_110126)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _110126)
^ Fail: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _110126)
Call: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
Unify: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Call: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _113912))
^ Unify: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _113912))
^ Call: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _113912)
^ Fail: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _113912)
^ Fail: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _113912))
Fail: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Fail: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _110126), call(_110126)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _118332)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_118902))
^ Call: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _118902)
^ Fail: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _118902)
^ Redo: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _120904)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _121532)
^ Call: (88) [pfc_lib] mpred_core_database_term(path, 2, _122100)
^ Fail: (88) [pfc_lib] mpred_core_database_term(path, 2, _122100)
^ Fail: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _123474)
^ Call: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _124078), call(_124078)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Unify: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _124078), call(_124078)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _124078)
^ Fail: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _124078)
Call: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
Unify: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _127864))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _127864))
^ Call: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _127864)
^ Fail: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _127864)
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _127864))
Fail: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Fail: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _124078), call(_124078)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_132230, path, 2, prologHybrid), _132260), call(_132260)*->true;clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_132230, path, 2, prologHybrid), _132260), call(_132260)*->true;clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))))
^ Call: (91) [hook_database] clause(mpred_prop(_132230, path, 2, prologHybrid), _132260)
^ Fail: (91) [hook_database] clause(mpred_prop(_132230, path, 2, prologHybrid), _132260)
Call: (91) [hook_database] clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))
Unify: (91) [hook_database] clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))
^ Call: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_132230, path, 2, prologHybrid), _136046))
^ Unify: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_132230, path, 2, prologHybrid), _136046))
^ Call: (93) [baseKB] clause(mpred_prop(_132230, path, 2, prologHybrid), _136046)
^ Fail: (93) [baseKB] clause(mpred_prop(_132230, path, 2, prologHybrid), _136046)
^ Fail: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_132230, path, 2, prologHybrid), _136046))
Fail: (91) [hook_database] clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_132230, path, 2, prologHybrid), _132260), call(_132260)*->true;clause_b(baseKB:mpred_prop(_132230, path, 2, prologHybrid))))
^ Call: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _140466)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_141036))
^ Call: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _141036)
^ Fail: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _141036)
^ Redo: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _143038)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _143666)
^ Call: (89) [pfc_lib] mpred_core_database_term(path, 2, _144234)
^ Fail: (89) [pfc_lib] mpred_core_database_term(path, 2, _144234)
^ Fail: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _145608)
Call: (98) [$autoload] leave_sandbox(_146206)
Unify: (98) [$autoload] leave_sandbox(_146206)
Exit: (98) [$autoload] leave_sandbox(false)
Call: (97) [$autoload] restore_sandbox(false)
Unify: (97) [$autoload] restore_sandbox(false)
Exit: (97) [$autoload] restore_sandbox(false)
Fail: (83) [pfc_lib] neg_may_naf(baseKB:path(1, 1))
^ Fail: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(1, 1)), info(pfc_lib:neg_may_naf(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1459, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(1, 1)))))
Fail: (76) [pfc_lib] neg_in_code0(baseKB:path(1, 1))
^ Fail: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(1, 1)), info(pfc_lib:neg_in_code0(baseKB:path(1, 1)), 'mpred_core.pl':273), 1, 1320, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(1, 1)))))
^ Fail: (68) [baseKB] ~ (baseKB:path(1, 1))
^ Call: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
failure=info((why_was_true(baseKB:(\+ ~path(1,1))),rtrace(baseKB: ~path(1,1))))
no_proof_for(\+ ~path(1,1)).
no_proof_for(\+ ~path(1,1)).
no_proof_for(\+ ~path(1,1)).
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0004_Line_0000__path_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0004_Line_0000__path_1-junit.xml
~*/
:- mpred_test(~path(2,2)).
%~ mpred_test("Test_0005_Line_0000__path_2",baseKB: ~path(2,2))
/*~
%~ mpred_test("Test_0005_Line_0000__path_2",baseKB: ~path(2,2))
^ Call: (68) [baseKB] ~path(2, 2)
^ Unify: (68) [baseKB] ~ (baseKB:path(2, 2))
^ Call: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(2, 2)), info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(2, 2)))))
^ Unify: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(2, 2)), info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(2, 2)))))
Call: (76) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (76) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (76) [loop_check] prolog_frame_attribute(1189, parent_goal, loop_check_term_frame(_259520, info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, _259526, _259528))
^ Fail: (76) [loop_check] prolog_frame_attribute(1189, parent_goal, loop_check_term_frame(_259520, info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, _259526, _259528))
^ Redo: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(2, 2)), info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(2, 2)))))
Call: (76) [pfc_lib] neg_in_code0(baseKB:path(2, 2))
Unify: (76) [pfc_lib] neg_in_code0(baseKB:path(2, 2))
^ Call: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(2, 2)), info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(2, 2)))))
^ Unify: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(2, 2)), info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(2, 2)))))
Call: (83) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (83) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (83) [loop_check] prolog_frame_attribute(1328, parent_goal, loop_check_term_frame(_265244, info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, _265250, _265252))
^ Fail: (83) [loop_check] prolog_frame_attribute(1328, parent_goal, loop_check_term_frame(_265244, info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, _265250, _265252))
^ Redo: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(2, 2)), info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(2, 2)))))
Call: (83) [pfc_lib] neg_may_naf(baseKB:path(2, 2))
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(2, 2))
^ Call: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _268576)
^ Unify: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _268576)
^ Call: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _268576)
^ Fail: (88) [system] clause(pfc_lib:prologNegByFailure(path), true, _268576)
^ Fail: (87) [pfc_lib] hook_database:clause_i(pfc_lib:prologNegByFailure(path), true, _268576)
Unify: (83) [pfc_lib] neg_may_naf(baseKB:path(2, 2))
^ Call: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(2, 2))
^ Unify: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(2, 2))
^ Call: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(2, 2))
^ Unify: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(2, 2))
^ Fail: (85) [pfc_lib] ucatch:is_ftVar(baseKB:path(2, 2))
^ Redo: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(2, 2))
^ Exit: (84) [pfc_lib] ucatch:is_ftCompound(baseKB:path(2, 2))
^ Call: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _276746), call(_276746)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Unify: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _276746), call(_276746)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _276746)
^ Fail: (90) [hook_database] clause(mpred_prop(baseKB, path, 2, prologHybrid), _276746)
Call: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
Unify: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Call: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _280532))
^ Unify: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _280532))
^ Call: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _280532)
^ Fail: (92) [baseKB] clause(mpred_prop(baseKB, path, 2, prologHybrid), _280532)
^ Fail: (91) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologHybrid), _280532))
Fail: (90) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))
^ Fail: (88) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologHybrid), _276746), call(_276746)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologHybrid))))
^ Call: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _284952)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_285522))
^ Call: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _285522)
^ Fail: (88) [pfc_lib] mpred_database_term_syntax(path, 2, _285522)
^ Redo: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _287524)
^ Unify: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _288152)
^ Call: (88) [pfc_lib] mpred_core_database_term(path, 2, _288720)
^ Fail: (88) [pfc_lib] mpred_core_database_term(path, 2, _288720)
^ Fail: (87) [pfc_lib] baseKB:mpred_database_term(path, 2, _290094)
^ Call: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _290698), call(_290698)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Unify: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _290698), call(_290698)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _290698)
^ Fail: (88) [hook_database] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _290698)
Call: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
Unify: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _294484))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _294484))
^ Call: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _294484)
^ Fail: (90) [baseKB] clause(mpred_prop(baseKB, path, 2, prologBuiltin), _294484)
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(baseKB, path, 2, prologBuiltin), _294484))
Fail: (88) [hook_database] clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))
^ Fail: (86) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(baseKB, path, 2, prologBuiltin), _290698), call(_290698)*->true;clause_b(baseKB:mpred_prop(baseKB, path, 2, prologBuiltin))))
^ Call: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_298850, path, 2, prologHybrid), _298880), call(_298880)*->true;clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))))
^ Unify: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_298850, path, 2, prologHybrid), _298880), call(_298880)*->true;clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))))
^ Call: (91) [hook_database] clause(mpred_prop(_298850, path, 2, prologHybrid), _298880)
^ Fail: (91) [hook_database] clause(mpred_prop(_298850, path, 2, prologHybrid), _298880)
Call: (91) [hook_database] clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))
Unify: (91) [hook_database] clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))
^ Call: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_298850, path, 2, prologHybrid), _302666))
^ Unify: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_298850, path, 2, prologHybrid), _302666))
^ Call: (93) [baseKB] clause(mpred_prop(_298850, path, 2, prologHybrid), _302666)
^ Fail: (93) [baseKB] clause(mpred_prop(_298850, path, 2, prologHybrid), _302666)
^ Fail: (92) [hook_database] hook_database:pfc_with_quiet_vars_lock(baseKB:clause(mpred_prop(_298850, path, 2, prologHybrid), _302666))
Fail: (91) [hook_database] clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))
^ Fail: (89) [hook_database] hook_database:pfc_with_quiet_vars_lock((clause(mpred_prop(_298850, path, 2, prologHybrid), _298880), call(_298880)*->true;clause_b(baseKB:mpred_prop(_298850, path, 2, prologHybrid))))
^ Call: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _307086)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, syntaxic(_307656))
^ Call: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _307656)
^ Fail: (89) [pfc_lib] mpred_database_term_syntax(path, 2, _307656)
^ Redo: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _309658)
^ Unify: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _310286)
^ Call: (89) [pfc_lib] mpred_core_database_term(path, 2, _310854)
^ Fail: (89) [pfc_lib] mpred_core_database_term(path, 2, _310854)
^ Fail: (88) [pfc_lib] baseKB:mpred_database_term(path, 2, _312228)
Call: (98) [$autoload] leave_sandbox(_312826)
Unify: (98) [$autoload] leave_sandbox(_312826)
Exit: (98) [$autoload] leave_sandbox(false)
Call: (97) [$autoload] restore_sandbox(false)
Unify: (97) [$autoload] restore_sandbox(false)
Exit: (97) [$autoload] restore_sandbox(false)
Fail: (83) [pfc_lib] neg_may_naf(baseKB:path(2, 2))
^ Fail: (82) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_may_naf(baseKB:path(2, 2)), info(pfc_lib:neg_may_naf(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1328, pfc_lib:trace_or_throw(looped(pfc_lib:neg_may_naf(baseKB:path(2, 2)))))
Fail: (76) [pfc_lib] neg_in_code0(baseKB:path(2, 2))
^ Fail: (75) [loop_check] loop_check:loop_check_term_frame(pfc_lib:neg_in_code0(baseKB:path(2, 2)), info(pfc_lib:neg_in_code0(baseKB:path(2, 2)), 'mpred_core.pl':273), 1, 1189, pfc_lib:trace_or_throw(looped(pfc_lib:neg_in_code0(baseKB:path(2, 2)))))
^ Fail: (68) [baseKB] ~ (baseKB:path(2, 2))
^ Call: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
failure=info((why_was_true(baseKB:(\+ ~path(2,2))),rtrace(baseKB: ~path(2,2))))
no_proof_for(\+ ~path(2,2)).
no_proof_for(\+ ~path(2,2)).
no_proof_for(\+ ~path(2,2)).
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0005_Line_0000__path_2'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0005_Line_0000__path_2-junit.xml
~*/
/*
:- mpred_test(\+path(1,1)).
:- mpred_test(\+path(2,2)).
*/
:- mpred_test(\+path(3,1)).
%~ mpred_test("Test_0006_Line_0000__naf_path_3",baseKB:(\+path(3,1)))
/*~
%~ mpred_test("Test_0006_Line_0000__naf_path_3",baseKB:(\+path(3,1)))
passed=info(why_was_true(baseKB:(\+path(3,1))))
no_proof_for(\+path(3,1)).
no_proof_for(\+path(3,1)).
no_proof_for(\+path(3,1)).
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0006_Line_0000__naf_path_3'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0006_Line_0000__naf_path_3-junit.xml
~*/
:- mpred_test(\+ ~path(1,5)).
%~ mpred_test("Test_0007_Line_0000__naf_path_1",baseKB:(\+ ~path(1,5)))
/*~
%~ mpred_test("Test_0007_Line_0000__naf_path_1",baseKB:(\+ ~path(1,5)))
passed=info(why_was_true(baseKB:(\+ ~path(1,5))))
no_proof_for(\+ ~path(1,5)).
no_proof_for(\+ ~path(1,5)).
no_proof_for(\+ ~path(1,5)).
name ='logicmoo.pfc.test.sanity_base.NEG_01V-Test_0007_Line_0000__naf_path_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.NEG_01V'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif neg_01v.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.NEG_01V-Test_0007_Line_0000__naf_path_1-junit.xml
~*/
:- break.
%~ skipped(blocks_on_input,break)
%~ unused(save_junit_results)
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/neg_01v.pfc:36
%~ test_completed_exit(6)
:- dynamic junit_prop/3.
:- dynamic junit_prop/3.
:- dynamic junit_prop/3.
```
totalTime=1
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ANEG_01V
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3/packs_sys/pfc/t/sanity_base/neg_01v.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/NEG_01V/logicmoo_pfc_test_sanity_base_NEG_01V_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/67/testReport/logicmoo.pfc.test.sanity_base/NEG_01V/logicmoo_pfc_test_sanity_base_NEG_01V_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3
https://github.com/logicmoo/logicmoo_workspace/blob/869479bc8cf913ee2df5ebbe49363a2dad9c9fb3/packs_sys/pfc/t/sanity_base/neg_01v.pfc
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k neg_01v.pfc (returned 6)
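
For readers skimming the trace: the tests above distinguish negation-as-failure (`\+`) from explicit negation (`~`). `\+ path(3,1)` succeeds simply because `path(3,1)` is underivable, while `~path(1,1)` requires an explicitly asserted negative fact, which this KB lacks — hence Tests 0004 and 0005 fail. The positive side can be sketched as a plain transitive closure. This is an illustrative Python analogue only, not the PFC engine; the base edges `(1,2)` and `(2,3)` are assumptions inferred from the passing tests, since the rule section of `neg_01v.pfc` is not shown in this excerpt.

```python
# Hedged sketch: mirrors path(X,Y) & path(Y,Z) ==> path(X,Z) with a
# fixpoint loop over a set of edge tuples. Base facts are assumptions.
def transitive_closure(edges):
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        # path(X,Y) and path(Y,Z) derive path(X,Z)
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

paths = transitive_closure({(1, 2), (2, 3)})
print((1, 3) in paths)   # True  -- derivable, like the passing path(1,3) test
print((3, 1) in paths)   # False -- underivable, so \+ path(3,1) succeeds by NAF
```

Note the asymmetry this makes visible: absence from the closure justifies `\+` but not `~` — nothing in the closure computation ever produces a negative fact, which is exactly why `mpred_test(~path(1,1))` reports `no_proof_for` above.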
basekb path call path unify basekb path call loop check loop check term frame pfc lib neg in basekb path info pfc lib neg in basekb path mpred core pl pfc lib trace or throw looped pfc lib neg in basekb path unify loop check loop check term frame pfc lib neg in basekb path info pfc lib neg in basekb path mpred core pl pfc lib trace or throw looped pfc lib neg in basekb path call set prolog flag last call optimisation false exit set prolog flag last call optimisation false call prolog frame attribute parent goal loop check term frame info pfc lib neg in basekb path mpred core pl fail prolog frame attribute parent goal loop check term frame info pfc lib neg in basekb path mpred core pl redo loop check loop check term frame pfc lib neg in basekb path info pfc lib neg in basekb path mpred core pl pfc lib trace or throw looped pfc lib neg in basekb path call neg in basekb path unify neg in basekb path call loop check loop check term frame pfc lib neg may naf basekb path info pfc lib neg may naf basekb path mpred core pl pfc lib trace or throw looped pfc lib neg may naf basekb path unify loop check loop check term frame pfc lib neg may naf basekb path info pfc lib neg may naf basekb path mpred core pl pfc lib trace or throw looped pfc lib neg may naf basekb path call set prolog flag last call optimisation false exit set prolog flag last call optimisation false call prolog frame attribute parent goal loop check term frame info pfc lib neg may naf basekb path mpred core pl fail prolog frame attribute parent goal loop check term frame info pfc lib neg may naf basekb path mpred core pl redo loop check loop check term frame pfc lib neg may naf basekb path info pfc lib neg may naf basekb path mpred core pl pfc lib trace or throw looped pfc lib neg may naf basekb path call neg may naf basekb path unify neg may naf basekb path call hook database clause i pfc lib prolognegbyfailure path true unify hook database clause i pfc lib prolognegbyfailure path true call clause pfc lib 
prolognegbyfailure path true fail clause pfc lib prolognegbyfailure path true fail hook database clause i pfc lib prolognegbyfailure path true unify neg may naf basekb path call ucatch is ftcompound basekb path unify ucatch is ftcompound basekb path call ucatch is ftvar basekb path unify ucatch is ftvar basekb path fail ucatch is ftvar basekb path redo ucatch is ftcompound basekb path exit ucatch is ftcompound basekb path call hook database pfc with quiet vars lock clause mpred prop basekb path prologhybrid call true clause b basekb mpred prop basekb path prologhybrid unify hook database pfc with quiet vars lock clause mpred prop basekb path prologhybrid call true clause b basekb mpred prop basekb path prologhybrid call clause mpred prop basekb path prologhybrid fail clause mpred prop basekb path prologhybrid call clause b basekb mpred prop basekb path prologhybrid unify clause b basekb mpred prop basekb path prologhybrid call hook database pfc with quiet vars lock basekb clause mpred prop basekb path prologhybrid unify hook database pfc with quiet vars lock basekb clause mpred prop basekb path prologhybrid call clause mpred prop basekb path prologhybrid fail clause mpred prop basekb path prologhybrid fail hook database pfc with quiet vars lock basekb clause mpred prop basekb path prologhybrid fail clause b basekb mpred prop basekb path prologhybrid fail hook database pfc with quiet vars lock clause mpred prop basekb path prologhybrid call true clause b basekb mpred prop basekb path prologhybrid call basekb mpred database term path unify basekb mpred database term path syntaxic call mpred database term syntax path fail mpred database term syntax path redo basekb mpred database term path unify basekb mpred database term path call mpred core database term path fail mpred core database term path fail basekb mpred database term path call hook database pfc with quiet vars lock clause mpred prop basekb path prologbuiltin call true clause b basekb mpred prop basekb path 
prologbuiltin unify hook database pfc with quiet vars lock clause mpred prop basekb path prologbuiltin call true clause b basekb mpred prop basekb path prologbuiltin call clause mpred prop basekb path prologbuiltin fail clause mpred prop basekb path prologbuiltin call clause b basekb mpred prop basekb path prologbuiltin unify clause b basekb mpred prop basekb path prologbuiltin call hook database pfc with quiet vars lock basekb clause mpred prop basekb path prologbuiltin unify hook database pfc with quiet vars lock basekb clause mpred prop basekb path prologbuiltin call clause mpred prop basekb path prologbuiltin fail clause mpred prop basekb path prologbuiltin fail hook database pfc with quiet vars lock basekb clause mpred prop basekb path prologbuiltin fail clause b basekb mpred prop basekb path prologbuiltin fail hook database pfc with quiet vars lock clause mpred prop basekb path prologbuiltin call true clause b basekb mpred prop basekb path prologbuiltin call hook database pfc with quiet vars lock clause mpred prop path prologhybrid call true clause b basekb mpred prop path prologhybrid unify hook database pfc with quiet vars lock clause mpred prop path prologhybrid call true clause b basekb mpred prop path prologhybrid call clause mpred prop path prologhybrid fail clause mpred prop path prologhybrid call clause b basekb mpred prop path prologhybrid unify clause b basekb mpred prop path prologhybrid call hook database pfc with quiet vars lock basekb clause mpred prop path prologhybrid unify hook database pfc with quiet vars lock basekb clause mpred prop path prologhybrid call clause mpred prop path prologhybrid fail clause mpred prop path prologhybrid fail hook database pfc with quiet vars lock basekb clause mpred prop path prologhybrid fail clause b basekb mpred prop path prologhybrid fail hook database pfc with quiet vars lock clause mpred prop path prologhybrid call true clause b basekb mpred prop path prologhybrid call basekb mpred database term path unify 
basekb mpred database term path syntaxic call mpred database term syntax path fail mpred database term syntax path redo basekb mpred database term path unify basekb mpred database term path call mpred core database term path fail mpred core database term path fail basekb mpred database term path call leave sandbox unify leave sandbox exit leave sandbox false call restore sandbox false unify restore sandbox false exit restore sandbox false fail neg may naf basekb path fail loop check loop check term frame pfc lib neg may naf basekb path info pfc lib neg may naf basekb path mpred core pl pfc lib trace or throw looped pfc lib neg may naf basekb path fail neg in basekb path fail loop check loop check term frame pfc lib neg in basekb path info pfc lib neg in basekb path mpred core pl pfc lib trace or throw looped pfc lib neg in basekb path fail basekb path call must sanity mquietly if true rtrace tat normal unify must sanity mquietly if true rtrace tat normal failure info why was true basekb path rtrace basekb path no proof for path no proof for path no proof for path name logicmoo pfc test sanity base neg test line path junit classname logicmoo pfc test sanity base neg junit cmd timeout foreground preserve status s sigkill k lmoo clif neg pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo junit test sanity base units logicmoo pfc test sanity base neg test line path junit xml mpred test path mpred test path mpred test path mpred test test line naf path basekb path mpred test test line naf path basekb path passed info why was true basekb path no proof for path no proof for path no proof for path name logicmoo pfc test sanity base neg test line naf path junit classname logicmoo pfc test sanity base neg junit cmd timeout foreground preserve status s sigkill k lmoo clif neg pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo junit test sanity base units logicmoo pfc test sanity 
base neg test line naf path junit xml mpred test path mpred test test line naf path basekb path mpred test test line naf path basekb path passed info why was true basekb path no proof for path no proof for path no proof for path name logicmoo pfc test sanity base neg test line naf path junit classname logicmoo pfc test sanity base neg junit cmd timeout foreground preserve status s sigkill k lmoo clif neg pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo junit test sanity base units logicmoo pfc test sanity base neg test line naf path junit xml break skipped blocks on input break unused save junit results var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base neg pfc test completed exit dynamic junit prop dynamic junit prop dynamic junit prop totaltime issue search gitlab latest this build github failed var lib jenkins workspace logicmoo workspace bin lmoo junit minor k neg pfc returned
| 0
|
19,769
| 26,143,988,745
|
IssuesEvent
|
2022-12-29 23:57:26
|
vectordotdev/vector
|
https://api.github.com/repos/vectordotdev/vector
|
closed
|
Log schemas
|
type: enhancement domain: processing Epic
|
A common need for Vector users is the ability to map data according to different schemas. This is a key requirement for Vector since it aims to be schema, standard, and vendor-neutral. In order to deliver on this claim, Vector must not only support a variety of schemas independently, but it must also assist in the interchange between them.
# Use Cases
## Transitioning to Vector
Schemas create very heavy lock-in. This is because most downstream systems depend on this schema. To name a few:
1. Alerts.
2. Graphs and dashboards.
3. Storages.
4. Humans.
Changing a schema can break all of these things which usually is not acceptable. To prevent this, Vector must adopt their current schema in a way that downstream dependencies do not notice.
## Transitioning Vendors
The use case above illustrates the need for Vector to support a single schema at a time, but there are cases where a user would need to support multiple. For example, when transitioning vendors. Vector must not only support the "read" schema but also transform the data to an entirely new "write" schema.
For example, if a user is transitioning from Splunk to Elasticsearch, Vector must ingest the data under the [Splunk Common Information Model](https://docs.splunk.com/Documentation/CIM/4.17.0/User/Overview) and transform it to the [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/index.html).
## Automatic Dashboards, Alerts, & Insights
A benefit of using a vendor's agent is that it'll unlock automatic dashboards, alerts, and other features. This not only saves a considerable amount of time and effort, but you can effectively delegate the management of these things to your chosen vendor. For example, I assume that DataDog, and their community, continually improve their dashboards. In this case, it's very important that Vector can transparently adopt the DataDog schema so that DataDog Vector users can receive the same benefit. It also alleviates us from having to maintain these entities as well.
# Schemas
1. [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/index.html)
2. [Splunk Common Information Model (CIM)](https://docs.splunk.com/Documentation/CIM/4.17.0/User/Overview)
3. [OpenTelemetry Log Data Model](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/logs/data-model.md)
4. [GELF](https://docs.graylog.org/en/3.2/pages/gelf.html#gelf-payload-specification)
5. [DataDog's reserved log attributes](https://docs.datadoghq.com/logs/log_collection/?tab=http#reserved-attributes)
6. ...and more
# Proposal
In short, I'm proposing that we attach the known schema to each Vector event during ingestion. This would allow us to lookup fields and map them across schemas. There are a lot of little details to discuss which we can cover in an RFC. To name a few:
1. How would Vector detect the schema?
2. Should Vector strictly enforce the schema? Ex: Not allowing users to add fields that would violate the schema.
3. Should Vector reject data at the source-level that does not conform to the chosen schema?
4. Should Vector adopt a default schema? Ex: OpenTelemetry.
5. What happens when the user has a custom schema that we know nothing about? Ex: require them to manually map data when necessary.
|
1.0
|
Log schemas - A common need for Vector users is the ability to map data according to different schemas. This is a key requirement for Vector since it aims to be schema, standard, and vendor-neutral. In order to deliver on this claim, Vector must not only support a variety of schemas independently, but it must also assist in the interchange between them.
# Use Cases
## Transitioning to Vector
Schemas create very heavy lock-in. This is because most downstream systems depend on this schema. To name a few:
1. Alerts.
2. Graphs and dashboards.
3. Storages.
4. Humans.
Changing a schema can break all of these things which usually is not acceptable. To prevent this, Vector must adopt their current schema in a way that downstream dependencies do not notice.
## Transitioning Vendors
The use case above illustrates the need for Vector to support a single schema at a time, but there are cases where a user would need to support multiple. For example, when transitioning vendors. Vector must not only support the "read" schema but also transform the data to an entirely new "write" schema.
For example, if a user is transitioning from Splunk to Elasticsearch, Vector must ingest the data under the [Splunk Common Information Model](https://docs.splunk.com/Documentation/CIM/4.17.0/User/Overview) and transform it to the [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/index.html).
## Automatic Dashboards, Alerts, & Insights
A benefit of using a vendor's agent is that it'll unlock automatic dashboards, alerts, and other features. This not only saves a considerable amount of time and effort, but you can effectively delegate the management of these things to your chosen vendor. For example, I assume that DataDog, and their community, continually improve their dashboards. In this case, it's very important that Vector can transparently adopt the DataDog schema so that DataDog Vector users can receive the same benefit. It also alleviates us from having to maintain these entities as well.
# Schemas
1. [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/index.html)
2. [Splunk Common Information Model (CIM)](https://docs.splunk.com/Documentation/CIM/4.17.0/User/Overview)
3. [OpenTelemetry Log Data Model](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/logs/data-model.md)
4. [GELF](https://docs.graylog.org/en/3.2/pages/gelf.html#gelf-payload-specification)
5. [DataDog's reserved log attributes](https://docs.datadoghq.com/logs/log_collection/?tab=http#reserved-attributes)
6. ...and more
# Proposal
In short, I'm proposing that we attach the known schema to each Vector event during ingestion. This would allow us to lookup fields and map them across schemas. There are a lot of little details to discuss which we can cover in an RFC. To name a few:
1. How would Vector detect the schema?
2. Should Vector strictly enforce the schema? Ex: Not allowing users to add fields that would violate the schema.
3. Should Vector reject data at the source-level that does not conform to the chosen schema?
4. Should Vector adopt a default schema? Ex: OpenTelemetry.
5. What happens when the user has a custom schema that we know nothing about? Ex: require them to manually map data when necessary.
|
process
|
log schemas a common need for vector users is the ability to map data according to different schemas this is a key requirement for vector since it aims to be schema standard and vendor neutral in order to deliver on this claim vector must not only support a variety of schemas independently but it must also assist in the interchange between them use cases transitioning to vector schemas create very heavy lock in this is because most downstream systems depend on this schema to name a few alerts graphs and dashboards storages humans changing a schema can break all of these things which usually is not acceptable to prevent this vector must adopt their current schema in a way that downstream dependencies do not notice transitioning vendors the use case above illustrates the need for vector to support a single schema at a time but there are cases where a user would need to support multiple for example when transitioning vendors vector must not only support the read schema but also transform the data to an entirely new write schema for example if a user is transitioning from splunk to elasticsearch vector must ingest the data under the and transform it to the automatic dashboards alerts insights a benefit of using a vendor s agent is that it ll unlock automatic dashboards alerts and other features this not only saves a considerable amount of time and effort but you can effectively delegate the management of these things to your chosen vendor for example i assume that datadog and their community continually improve their dashboards in this case it s very important that vector can transparently adopt the datadog schema so that datadog vector users can receive the same benefit it also alleviates us from having to maintain these entities as well schemas and more proposal in short i m proposing that we attach the known schema to each vector event during ingestion this would allow us to lookup fields and map them across schemas there are a lot of little details to discuss which 
we can cover in an rfc to name a few how would vector detect the schema should vector strictly enforce the schema ex not allowing users to add fields that would violate the schema should vector reject data at the source level that does not conform to the chosen schema should vector adopt a default schema ex opentelemetry what happens when the user has a custom schema that we know nothing about ex require them to manually map data when necessary
| 1
|
283,801
| 30,913,549,264
|
IssuesEvent
|
2023-08-05 02:12:35
|
hshivhare67/kernel_v4.19.72
|
https://api.github.com/repos/hshivhare67/kernel_v4.19.72
|
reopened
|
CVE-2021-28660 (High) detected in linuxlinux-4.19.282
|
Mend: dependency security vulnerability
|
## CVE-2021-28660 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.282</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/hshivhare67/kernel_v4.19.72/commit/139c4e073703974ca0b05255c4cff6dcd52a8e31">139c4e073703974ca0b05255c4cff6dcd52a8e31</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
rtw_wx_set_scan in drivers/staging/rtl8188eu/os_dep/ioctl_linux.c in the Linux kernel through 5.11.6 allows writing beyond the end of the ->ssid[] array. NOTE: from the perspective of kernel.org releases, CVE IDs are not normally used for drivers/staging/* (unfinished work); however, system integrators may have situations in which a drivers/staging issue is relevant to their own customer base.
<p>Publish Date: 2021-03-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28660>CVE-2021-28660</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-28660">https://www.linuxkernelcves.com/cves/CVE-2021-28660</a></p>
<p>Release Date: 2021-03-17</p>
<p>Fix Resolution: v4.4.262,v4.9.262,v4.14.226,v4.19.181,v5.4.106,v5.10.24,v5.11.7,v5.12-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-28660 (High) detected in linuxlinux-4.19.282 - ## CVE-2021-28660 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.282</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/hshivhare67/kernel_v4.19.72/commit/139c4e073703974ca0b05255c4cff6dcd52a8e31">139c4e073703974ca0b05255c4cff6dcd52a8e31</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
rtw_wx_set_scan in drivers/staging/rtl8188eu/os_dep/ioctl_linux.c in the Linux kernel through 5.11.6 allows writing beyond the end of the ->ssid[] array. NOTE: from the perspective of kernel.org releases, CVE IDs are not normally used for drivers/staging/* (unfinished work); however, system integrators may have situations in which a drivers/staging issue is relevant to their own customer base.
<p>Publish Date: 2021-03-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28660>CVE-2021-28660</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-28660">https://www.linuxkernelcves.com/cves/CVE-2021-28660</a></p>
<p>Release Date: 2021-03-17</p>
<p>Fix Resolution: v4.4.262,v4.9.262,v4.14.226,v4.19.181,v5.4.106,v5.10.24,v5.11.7,v5.12-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers staging os dep ioctl linux c drivers staging os dep ioctl linux c vulnerability details rtw wx set scan in drivers staging os dep ioctl linux c in the linux kernel through allows writing beyond the end of the ssid array note from the perspective of kernel org releases cve ids are not normally used for drivers staging unfinished work however system integrators may have situations in which a drivers staging issue is relevant to their own customer base publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
186,900
| 15,087,248,817
|
IssuesEvent
|
2021-02-05 21:47:15
|
observablehq/inputs
|
https://api.github.com/repos/observablehq/inputs
|
closed
|
Link to examples on Observable
|
documentation
|
`Examples` links in d3's doc pages are very useful

It would be good to do the same, say https://observablehq.com/@observablehq/input-button for Button.
|
1.0
|
Link to examples on Observable - `Examples` links in d3's doc pages are very useful

It would be good to do the same, say https://observablehq.com/@observablehq/input-button for Button.
|
non_process
|
link to examples on observable examples links in s doc pages are very useful it would be good to do the same say for button
| 0
|
18,064
| 24,068,081,266
|
IssuesEvent
|
2022-09-17 19:41:40
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Prince of Chaos from "The Callback Queen" (Screenshots and Poster Added)
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Prince of Chaos
Type (film/tv show): film - epic fantasy
Film or show in which it appears: The Callback Queen
Is the parent film/show streaming anywhere? Yes - Amazon Prime
About when in the parent film/show does it appear? There's a screening at the end.
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamp: 1:23:16 to end
Synopsis: Based on the _Prince of Chaos_ novels by legendary author Horatio King
Starring: Kate Loughlin as Princess Grace
Studio: BFI Film
Quote: Become the nightmare that haunts their dreams. Become the death-bringer of prophesy. Become the prince of chaos!










|
1.0
|
Add Prince of Chaos from "The Callback Queen" (Screenshots and Poster Added) - Please add as much of the following info as you can:
Title: Prince of Chaos
Type (film/tv show): film - epic fantasy
Film or show in which it appears: The Callback Queen
Is the parent film/show streaming anywhere? Yes - Amazon Prime
About when in the parent film/show does it appear? There's a screening at the end.
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamp: 1:23:16 to end
Synopsis: Based on the _Prince of Chaos_ novels by legendary author Horatio King
Starring: Kate Loughlin as Princess Grace
Studio: BFI Film
Quote: Become the nightmare that haunts their dreams. Become the death-bringer of prophesy. Become the prince of chaos!










|
process
|
add prince of chaos from the callback queen screenshots and poster added please add as much of the following info as you can title prince of chaos type film tv show film epic fantasy film or show in which it appears the callback queen is the parent film show streaming anywhere yes amazon prime about when in the parent film show does it appear there s a screening at the end actual footage of the film show can be seen yes no yes timestamp to end synopsis based on the prince of chaos novels by legendary author horatio king starring kate loughlin as princess grace studio bfi film quote become the nightmare that haunts their dreams become the death bringer of prophesy become the prince of chaos
| 1
|
22,056
| 30,573,917,827
|
IssuesEvent
|
2023-07-21 02:26:42
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
roblox-pyc 1.19.73 has 2 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "1.19.73",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.19.73/src/robloxpy.py:134",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.19.73/src/robloxpy.py:141",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpl_1xsvqq/roblox-pyc"
}
}```
|
1.0
|
roblox-pyc 1.19.73 has 2 GuardDog issues - https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "1.19.73",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.19.73/src/robloxpy.py:134",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.19.73/src/robloxpy.py:141",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpl_1xsvqq/roblox-pyc"
}
}```
|
process
|
roblox pyc has guarddog issues dependency roblox pyc version result issues errors results silent process execution location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmpl roblox pyc
| 1
|
229,808
| 17,578,628,350
|
IssuesEvent
|
2021-08-16 02:12:49
|
dankamongmen/notcurses
|
https://api.github.com/repos/dankamongmen/notcurses
|
closed
|
remove support for the iTerm2 graphics protocol
|
documentation enhancement macOS
|
As explained in #1987, the author of iTerm2 is opposed to cells and graphics sharing a cell, and thus has no plans to implement transparency (as we define it) for graphics (@gnachman states that it does honor transparency, which it does in a sense, but that transparency is only with regards to the background color, not any glyphery). Combined with iTerm2's prompting to save a file when one attempts to display a graphic, this pretty much makes it unsuitable for the Notcurses model. Remove the iTerm2 protocol support, which right now only serves to make us look incompetent.
macOS users can of course use Kitty or WezTerm to get glyph-aware transparency.
Let's go ahead and yank this by 2.3.14.
|
1.0
|
remove support for the iTerm2 graphics protocol - As explained in #1987, the author of iTerm2 is opposed to cells and graphics sharing a cell, and thus has no plans to implement transparency (as we define it) for graphics (@gnachman states that it does honor transparency, which it does in a sense, but that transparency is only with regards to the background color, not any glyphery). Combined with iTerm2's prompting to save a file when one attempts to display a graphic, this pretty much makes it unsuitable for the Notcurses model. Remove the iTerm2 protocol support, which right now only serves to make us look incompetent.
macOS users can of course use Kitty or WezTerm to get glyph-aware transparency.
Let's go ahead and yank this by 2.3.14.
|
non_process
|
remove support for the graphics protocol as explained in the author of is opposed to cells and graphics sharing a cell and thus has no plans to implement transparency as we define it for graphics gnachman states that it does honor transparency which it does in a sense but that transparency is only with regards to the background color not any glyphery combined with s prompting to save a file when one attempts to display a graphic this pretty much makes it unsuitable for the notcurses model remove the protocol support which right now only serves to make us look incompetent macos users can of course use kitty or wezterm to get glyph aware transparency let s go ahead and yank this by
| 0
|
14,048
| 16,853,304,494
|
IssuesEvent
|
2021-06-21 00:02:22
|
Leviatan-Analytics/LA-data-processing
|
https://api.github.com/repos/Leviatan-Analytics/LA-data-processing
|
closed
|
Research export CUDA EasyOCR Model [2]
|
Data Processing Sprint 2 Week 4
|
Estimated time: 2 hs per assignee
Research about the compatibility of EasyOCR library with a library to export the models to different platforms.
Output: Research document with the gathered information.
|
1.0
|
Research export CUDA EasyOCR Model [2] - Estimated time: 2 hs per assignee
Research about the compatibility of EasyOCR library with a library to export the models to different platforms.
Output: Research document with the gathered information.
|
process
|
research export cuda easyocr model estimated time hs per assignee research about the compatibility of easyocr library with a library to export the models to different platforms output research document with the gathered information
| 1
|
194,679
| 22,262,071,844
|
IssuesEvent
|
2022-06-10 02:04:29
|
nanopathi/linux-4.19.72_CVE-2021-3347
|
https://api.github.com/repos/nanopathi/linux-4.19.72_CVE-2021-3347
|
reopened
|
CVE-2020-12770 (Medium) detected in linuxlinux-4.19.239
|
security vulnerability
|
## CVE-2020-12770 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.239</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nanopathi/linux-4.19.72_CVE-2021-3347/commit/8b7c061438f230c475fd8cd97a0917f6ebb9fbe0">8b7c061438f230c475fd8cd97a0917f6ebb9fbe0</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel through 5.6.11. sg_write lacks an sg_remove_request call in a certain failure case, aka CID-83c6f2390040.
<p>Publish Date: 2020-05-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-12770>CVE-2020-12770</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12770">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12770</a></p>
<p>Release Date: 2020-07-29</p>
<p>Fix Resolution: v5.7-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-12770 (Medium) detected in linuxlinux-4.19.239 - ## CVE-2020-12770 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.239</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nanopathi/linux-4.19.72_CVE-2021-3347/commit/8b7c061438f230c475fd8cd97a0917f6ebb9fbe0">8b7c061438f230c475fd8cd97a0917f6ebb9fbe0</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel through 5.6.11. sg_write lacks an sg_remove_request call in a certain failure case, aka CID-83c6f2390040.
<p>Publish Date: 2020-05-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-12770>CVE-2020-12770</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12770">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12770</a></p>
<p>Release Date: 2020-07-29</p>
<p>Fix Resolution: v5.7-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers scsi sg c drivers scsi sg c vulnerability details an issue was discovered in the linux kernel through sg write lacks an sg remove request call in a certain failure case aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
324,855
| 27,825,599,815
|
IssuesEvent
|
2023-03-19 18:25:52
|
Patribots4738/ChargedUp2023
|
https://api.github.com/repos/Patribots4738/ChargedUp2023
|
closed
|
Quick rotational alignment
|
enhancement logic needs testing teleop
|
rotate the bot to 180/0 depending on what we are closer to, and use that to line up with the April tag / substation for the robot
|
1.0
|
Quick rotational alignment - rotate the bot to 180/0 depending on what we are closer to, and use that to line up with the April tag / substation for the robot
|
non_process
|
quick rotational alignment rotate the bot to depending on what we are closer to and use that to line up with the april tag substation for the robot
| 0
|
96,437
| 8,614,607,412
|
IssuesEvent
|
2018-11-19 17:58:41
|
CICE-Consortium/CICE
|
https://api.github.com/repos/CICE-Consortium/CICE
|
closed
|
Need upwind advection test -> Test coverage and Options Implementation
|
testing
|
In the test matrix,
https://docs.google.com/spreadsheets/d/1dMzOC2GAsGqXJvs7xHybCQkPRneeUm78Ah1OXp34bAQ
alt04 is supposed to test the upwind advection scheme but this causes the model to fail so remap is being used temporarily. We need to add or modify one of the tests to cover upwind.
|
1.0
|
Need upwind advection test -> Test coverage and Options Implementation - In the test matrix,
https://docs.google.com/spreadsheets/d/1dMzOC2GAsGqXJvs7xHybCQkPRneeUm78Ah1OXp34bAQ
alt04 is supposed to test the upwind advection scheme but this causes the model to fail so remap is being used temporarily. We need to add or modify one of the tests to cover upwind.
|
non_process
|
need upwind advection test test coverage and options implementation in the test matrix is supposed to test the upwind advection scheme but this causes the model to fail so remap is being used temporarily we need to add or modify one of the tests to cover upwind
| 0
|
4,637
| 7,482,182,777
|
IssuesEvent
|
2018-04-04 23:48:47
|
UnbFeelings/unb-feelings-GQA
|
https://api.github.com/repos/UnbFeelings/unb-feelings-GQA
|
closed
|
Atualizar descrição das atividades do processo
|
document process wiki
|
Atualizar todas as atividades descritas do processo, inserindo as linhas: Responsáveis e Envolvidos
|
1.0
|
Atualizar descrição das atividades do processo - Atualizar todas as atividades descritas do processo, inserindo as linhas: Responsáveis e Envolvidos
|
process
|
atualizar descrição das atividades do processo atualizar todas as atividades descritas do processo inserindo as linhas responsáveis e envolvidos
| 1
|
5,759
| 8,598,731,009
|
IssuesEvent
|
2018-11-15 22:47:03
|
gfrebello/qs-trip-planning-procedure
|
https://api.github.com/repos/gfrebello/qs-trip-planning-procedure
|
closed
|
Update Class Diagram
|
Priority:High Process:Create/Update Class Diagram
|
Some mistakes were verified in the class diagram during stages of development. They need to be revised and corrected. This task has been created with this purpose.
|
1.0
|
Update Class Diagram - Some mistakes were verified in the class diagram during stages of development. They need to be revised and corrected. This task has been created with this purpose.
|
process
|
update class diagram some mistakes were verified in the class diagram during stages of development they need to be revised and corrected this task has been created with this purpose
| 1
|
233,736
| 7,704,024,880
|
IssuesEvent
|
2018-05-21 10:35:34
|
handsontable/handsontable
|
https://api.github.com/repos/handsontable/handsontable
|
reopened
|
getColWidth(col) does not return actual col width
|
Core: callbacks Guess: few hours Priority: normal Type: Bug
|
I've defined `colWidths` in the options upon initialization, but I've also enabled the `strecthH` property, which stretches the column widths to fill the entire container. When I call the `getColWidth()` method, it returns the widths specified in my `colWidths` option rather than the actual calculated column width. This may not be a bug, but it seems like there should be some way to get the calculated column widths.
|
1.0
|
getColWidth(col) does not return actual col width - I've defined `colWidths` in the options upon initialization, but I've also enabled the `strecthH` property, which stretches the column widths to fill the entire container. When I call the `getColWidth()` method, it returns the widths specified in my `colWidths` option rather than the actual calculated column width. This may not be a bug, but it seems like there should be some way to get the calculated column widths.
|
non_process
|
getcolwidth col does not return actual col width i ve defined colwidths in the options upon initialization but i ve also enabled the strecthh property which stretches the column widths to fill the entire container when i call the getcolwidth method it returns the widths specified in my colwidths option rather than the actual calculated column width this may not be a bug but it seems like there should be some way to get the calculated column widths
| 0
|
14,069
| 16,891,327,502
|
IssuesEvent
|
2021-06-23 09:35:37
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
color calibration: some adaption settings breaks processing with opencl on intel gpu
|
bug: pending scope: image processing
|
activating color calibration with
adaption = linear Bradford or CAT16 or XYZ
and
"clip negative RGB from gamut" is unchecked
breaks OpenCL processing using intel on-cpu gpu
**To Reproduce**
1. edit opencl_device_priority to force preview or full pixel pipe to be processed on the intel gpu
2. activate color calibration
3. uncheck "clip negative RGB from gamut"
4. select adaption linear Bradford or CAT16 or XYZ
5. preview or full is black according to step 1
**workaround:**
the b/w setting are defined without having "clip negative RGB from gamut" checked
So maybe checking clip negative RGB from gamut in the presets might be a temporary solution
**Platform**
* darktable version : darktable 3.5.0+2648~g5586d4178
* OS : OSx
* Graphics card :
0.193262 [opencl_init] 0 'Intel(R) UHD Graphics 630'
0.193268 [opencl_init] 1 'AMD Radeon Pro 5500M Compute Engine'
|
1.0
|
color calibration: some adaption settings breaks processing with opencl on intel gpu - activating color calibration with
adaption = linear Bradford or CAT16 or XYZ
and
"clip negative RGB from gamut" is unchecked
breaks OpenCL processing using intel on-cpu gpu
**To Reproduce**
1. edit opencl_device_priority to force preview or full pixel pipe to be processed on the intel gpu
2. activate color calibration
3. uncheck "clip negative RGB from gamut"
4. select adaption linear Bradford or CAT16 or XYZ
5. preview or full is black according to step 1
**workaround:**
the b/w setting are defined without having "clip negative RGB from gamut" checked
So maybe checking clip negative RGB from gamut in the presets might be a temporary solution
**Platform**
* darktable version : darktable 3.5.0+2648~g5586d4178
* OS : OSx
* Graphics card :
0.193262 [opencl_init] 0 'Intel(R) UHD Graphics 630'
0.193268 [opencl_init] 1 'AMD Radeon Pro 5500M Compute Engine'
|
process
|
color calibration some adaption settings breaks processing with opencl on intel gpu activating color calibration with adaption linear bradford or or xyz and clip negative rgb from gamut is unchecked breaks opencl processing using intel on cpu gpu to reproduce edit opencl device priority to force preview or full pixel pipe to be processed on the intel gpu activate color calibration uncheck clip negative rgb from gamut select adaption linear bradford or or xyz preview or full is black according to step workaround the b w setting are defined without having clip negative rgb from gamut checked so maybe checking clip negative rgb from gamut in the presets might be a temporary solution platform darktable version darktable os osx graphics card intel r uhd graphics amd radeon pro compute engine
| 1
|
18,341
| 24,465,240,897
|
IssuesEvent
|
2022-10-07 14:30:04
|
flowsforapex/apex-flowsforapex
|
https://api.github.com/repos/flowsforapex/apex-flowsforapex
|
closed
|
[bug]: Manage Instance Step process plug-in doesn't work in call activity
|
bug process-plugin
|
### Flows for APEX version
22.2 (development)
### Oracle Database version
18
### Oracle APEX version
20.1
### What happened?
When trying to complete a task using the plug-in, you end up with an error message and the step is not completed.
Looking at the debug shows a no data found error because the process try to get information about the next object but the query does a join with the flow_instances_vw view where there is no value for a call activity
### Steps to reproduce
start a process contaning a call activity
Try to complete a task inside the call activty using the plug-in
### Expected behaviour
The task is correctly completed.
|
1.0
|
[bug]: Manage Instance Step process plug-in doesn't work in call activity - ### Flows for APEX version
22.2 (development)
### Oracle Database version
18
### Oracle APEX version
20.1
### What happened?
When trying to complete a task using the plug-in, you end up with an error message and the step is not completed.
Looking at the debug shows a no data found error because the process try to get information about the next object but the query does a join with the flow_instances_vw view where there is no value for a call activity
### Steps to reproduce
start a process contaning a call activity
Try to complete a task inside the call activty using the plug-in
### Expected behaviour
The task is correctly completed.
|
process
|
manage instance step process plug in doesn t work in call activity flows for apex version development oracle database version oracle apex version what happened when trying to complete a task using the plug in you end up with an error message and the step is not completed looking at the debug shows a no data found error because the process try to get information about the next object but the query does a join with the flow instances vw view where there is no value for a call activity steps to reproduce start a process contaning a call activity try to complete a task inside the call activty using the plug in expected behaviour the task is correctly completed
| 1
|
6,065
| 8,902,727,958
|
IssuesEvent
|
2019-01-17 08:32:45
|
Juris-M/citeproc-js
|
https://api.github.com/repos/Juris-M/citeproc-js
|
closed
|
text/number variable="locator" includes a "p." locator label when the associated term is empty
|
fix in process
|
The following:
```xml
<term name="page" form="long">
<single></single>
<multiple></multiple>
</term>
<term name="page" form="symbol">
<single></single>
<multiple></multiple>
</term>
<term name="page" form="short">
<single></single>
<multiple></multiple>
</term>
```
```xml
<macro name="juris-locator">
<number variable="locator" prefix="{" suffix="}"/>
</macro>
```
Renders as `{p. 53}`. If you put any content in the 'page' term, then the "p. " goes away. There are two problems here:
1. It seems the engine thinks the entire term is not defined, and runs locale fallback. However, my style wants empty page locators! You can, it seems, put content in the locator and then put choose branches everywhere to prevent the page label ever rendering. The source of the error might be something like `if ("") {} else console.log("empty strings test as false")`.
2. The text or number variable should never include a label. I'm not sure why it would be doing that. That's a pretty significant deviation from spec - maybe there are other problems here?
|
1.0
|
text/number variable="locator" includes a "p." locator label when the associated term is empty - The following:
```xml
<term name="page" form="long">
<single></single>
<multiple></multiple>
</term>
<term name="page" form="symbol">
<single></single>
<multiple></multiple>
</term>
<term name="page" form="short">
<single></single>
<multiple></multiple>
</term>
```
```xml
<macro name="juris-locator">
<number variable="locator" prefix="{" suffix="}"/>
</macro>
```
Renders as `{p. 53}`. If you put any content in the 'page' term, then the "p. " goes away. There are two problems here:
1. It seems the engine thinks the entire term is not defined, and runs locale fallback. However, my style wants empty page locators! You can, it seems, put content in the locator and then put choose branches everywhere to prevent the page label ever rendering. The source of the error might be something like `if ("") {} else console.log("empty strings test as false")`.
2. The text or number variable should never include a label. I'm not sure why it would be doing that. That's a pretty significant deviation from spec - maybe there are other problems here?
|
process
|
text number variable locator includes a p locator label when the associated term is empty the following xml xml renders as p if you put any content in the page term then the p goes away there are two problems here it seems the engine thinks the entire term is not defined and runs locale fallback however my style wants empty page locators you can it seems put content in the locator and then put choose branches everywhere to prevent the page label ever rendering the source of the error might be something like if else console log empty strings test as false the text or number variable should never include a label i m not sure why it would be doing that that s a pretty significant deviation from spec maybe there are other problems here
| 1
|
21,357
| 29,189,589,951
|
IssuesEvent
|
2023-05-19 18:35:36
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] Implement functions for available aggregations
|
.metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
To power this UI:
<img width="322" alt="image" src="https://user-images.githubusercontent.com/1455846/221764039-012db68f-8549-45f9-a519-8c1dbeedf2e7.png">
Needs to return available aggregation types for the current database as a list of maps in JS and UI-friendly (i.e., `clj->js` manner) manner, and we should be able to take one of these maps and pass it back in to `(aggregation query ...)` (as added by #28693) to add it to the query
For aggregations that need a Field or expression, either include the possible Fields inline, or implement a separate function to get the options for a choice from the first step.
We'll have to make sure we include Fields that aren't currently projected e.g. if we already have a `[:sum X]` aggregation it should still be possible to add a `[:sum Y]`, even if `Y` isn't currently in the `:lib/stage-metadata` for the current stage
|
1.0
|
[MLv2] Implement functions for available aggregations - To power this UI:
<img width="322" alt="image" src="https://user-images.githubusercontent.com/1455846/221764039-012db68f-8549-45f9-a519-8c1dbeedf2e7.png">
Needs to return available aggregation types for the current database as a list of maps in JS and UI-friendly (i.e., `clj->js` manner) manner, and we should be able to take one of these maps and pass it back in to `(aggregation query ...)` (as added by #28693) to add it to the query
For aggregations that need a Field or expression, either include the possible Fields inline, or implement a separate function to get the options for a choice from the first step.
We'll have to make sure we include Fields that aren't currently projected e.g. if we already have a `[:sum X]` aggregation it should still be possible to add a `[:sum Y]`, even if `Y` isn't currently in the `:lib/stage-metadata` for the current stage
|
process
|
implement functions for available aggregations to power this ui img width alt image src needs to return available aggregation types for the current database as a list of maps in js and ui friendly i e clj js manner manner and we should be able to take one of these maps and pass it back in to aggregation query as added by to add it to the query for aggregations that need a field or expression either include the possible fields inline or implement a separate function to get the options for a choice from the first step we ll have to make sure we include fields that aren t currently projected e g if we already have a aggregation it should still be possible to add a even if y isn t currently in the lib stage metadata for the current stage
| 1
|
592,499
| 17,909,536,656
|
IssuesEvent
|
2021-09-09 01:59:46
|
hackforla/tdm-calculator
|
https://api.github.com/repos/hackforla/tdm-calculator
|
closed
|
Add User defined strategy description on Summary Page
|
role: front-end level: medium priority: MUST HAVE p-Feature - Final Summary Page milestone: missing
|
### Overview
We need to add a new section on the final summary page that displays the user's description of the User defined strategy.
### Action Items
- [ ] Add a line called "User Defined Strategy Description:" to the final summary page
- [ ] Display the description in the column under the other project information or inputs
- [ ] Make so this is formatted correctly for long blocks of text
### Resources/Instructions
https://www.figma.com/file/nD9QK56Mzq7xNSaSUoeGx0/TDM-Calculator?node-id=1744%3A3531
|
1.0
|
Add User defined strategy description on Summary Page - ### Overview
We need to add a new section on the final summary page that displays the user's description of the User defined strategy.
### Action Items
- [ ] Add a line called "User Defined Strategy Description:" to the final summary page
- [ ] Display the description in the column under the other project information or inputs
- [ ] Make so this is formatted correctly for long blocks of text
### Resources/Instructions
https://www.figma.com/file/nD9QK56Mzq7xNSaSUoeGx0/TDM-Calculator?node-id=1744%3A3531
|
non_process
|
add user defined strategy description on summary page overview we need to add a new section on the final summary page that displays the user s description of the user defined strategy action items add a line called user defined strategy description to the final summary page display the description in the column under the other project information or inputs make so this is formatted correctly for long blocks of text resources instructions
| 0
|
53,182
| 13,131,397,110
|
IssuesEvent
|
2020-08-06 16:56:42
|
qpdb/mentat
|
https://api.github.com/repos/qpdb/mentat
|
opened
|
Figure out how to make Android library footprint size smaller
|
A-build A-design A-ffi
|
Currently (13/3/2018) the Android app footprint size is huge (70mb). We need to ensure that we can build Android apps with a sensible footprint. This ticket tracks discovering build options and streamlining our imports to reduce our release Android library size.
|
1.0
|
Figure out how to make Android library footprint size smaller - Currently (13/3/2018) the Android app footprint size is huge (70mb). We need to ensure that we can build Android apps with a sensible footprint. This ticket tracks discovering build options and streamlining our imports to reduce our release Android library size.
|
non_process
|
figure out how to make android library footprint size smaller currently the android app footprint size is huge we need to ensure that we can build android apps with a sensible footprint this ticket tracks discovering build options and streamlining our imports to reduce our release android library size
| 0
|
113,184
| 17,116,151,203
|
IssuesEvent
|
2021-07-11 11:52:52
|
theHinneh/Exercism
|
https://api.github.com/repos/theHinneh/Exercism
|
closed
|
CVE-2020-28500 (Medium) detected in lodash-4.17.15.tgz
|
security vulnerability
|
## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: Exercism/gigasecond/package.json</p>
<p>Path to vulnerable library: Exercism/gigasecond/node_modules/lodash,Exercism/hello-world/node_modules/lodash,Exercism/two-fer/node_modules/lodash,Exercism/leap/node_modules/lodash,Exercism/word-count/node_modules/lodash</p>
<p>
Dependency Hierarchy:
- eslint-7.2.0.tgz (Root Library)
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/theHinneh/Exercism/commit/061d94f44ec35d7bfb84d7f077afccd350659bc4">061d94f44ec35d7bfb84d7f077afccd350659bc4</a></p>
<p>Found in base branch: <b>typescript</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash-4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28500 (Medium) detected in lodash-4.17.15.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: Exercism/gigasecond/package.json</p>
<p>Path to vulnerable library: Exercism/gigasecond/node_modules/lodash,Exercism/hello-world/node_modules/lodash,Exercism/two-fer/node_modules/lodash,Exercism/leap/node_modules/lodash,Exercism/word-count/node_modules/lodash</p>
<p>
Dependency Hierarchy:
- eslint-7.2.0.tgz (Root Library)
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/theHinneh/Exercism/commit/061d94f44ec35d7bfb84d7f077afccd350659bc4">061d94f44ec35d7bfb84d7f077afccd350659bc4</a></p>
<p>Found in base branch: <b>typescript</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash-4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file exercism gigasecond package json path to vulnerable library exercism gigasecond node modules lodash exercism hello world node modules lodash exercism two fer node modules lodash exercism leap node modules lodash exercism word count node modules lodash dependency hierarchy eslint tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch typescript vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
|
4,673
| 7,517,256,305
|
IssuesEvent
|
2018-04-12 02:31:15
|
UnbFeelings/unb-feelings-GQA
|
https://api.github.com/repos/UnbFeelings/unb-feelings-GQA
|
closed
|
Definir critérios de avaliação
|
document help wanted process wiki
|
- [x] Criar documento "Objetos de Auditoria"
- [x] Listar todos os objetos a serem auditados
- [x] Definir critérios de avaliação para cada objeto auditado
@UnbFeelings/gqa
|
1.0
|
Definir critérios de avaliação - - [x] Criar documento "Objetos de Auditoria"
- [x] Listar todos os objetos a serem auditados
- [x] Definir critérios de avaliação para cada objeto auditado
@UnbFeelings/gqa
|
process
|
definir critérios de avaliação criar documento objetos de auditoria listar todos os objetos a serem auditados definir critérios de avaliação para cada objeto auditado unbfeelings gqa
| 1
|