Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
193,674 | 6,886,919,509 | IssuesEvent | 2017-11-21 21:16:25 | GoogleChrome/lighthouse | https://api.github.com/repos/GoogleChrome/lighthouse | closed | Document --no-enable-error-reporting and CI env var | needs-priority | https://github.com/GoogleChrome/lighthouse/blob/master/docs/error-reporting.md mentions the `--no-enable-error-reporting` but the readme and `lighthouse --help` do not list this flag.
That page also mentions as `CI` env variable. Needs examples and mention of that in `--help`.
It may also be better to rename the flag `--disable-enable-error-reporting` to be consistent with the other disable flags.
| 1.0 | Document --no-enable-error-reporting and CI env var - https://github.com/GoogleChrome/lighthouse/blob/master/docs/error-reporting.md mentions the `--no-enable-error-reporting` but the readme and `lighthouse --help` do not list this flag.
That page also mentions as `CI` env variable. Needs examples and mention of that in `--help`.
It may also be better to rename the flag `--disable-enable-error-reporting` to be consistent with the other disable flags.
| non_process | document no enable error reporting and ci env var mentions the no enable error reporting but the readme and lighthouse help do not list this flag that page also mentions as ci env variable needs examples and mention of that in help it may also be better to rename the flag disable enable error reporting to be consistent with the other disable flags | 0 |
20,389 | 27,045,255,583 | IssuesEvent | 2023-02-13 09:18:38 | camunda/issues | https://api.github.com/repos/camunda/issues | opened | Allow unsupported BPMN elements in non executable pools | component:zeebe-process-automation public kind:epic feature-parity target:8.2-alpha4 | ### Value Proposition Statement
Use unsupported BPMN elements in non-executable pools for documentation purposes.
### User Problem
Customers following our BPMN methodology described also in Real-Life BPMN might create collaboration diagrams that contain executable and non-executable Pools (for example a model like in https://camunda.com/bpmn/examples/#bpmn-examples-four-eyes-principle).
In https://github.com/camunda-community-hub/camunda-7-to-8-migration/issues/19 Stephan raised the problem that Zeebe rejects deployments containing unsupported elements, even if in a nonexecutable pool (e.g. Element: ConditionalEventDefinition_0lvmueg ERROR: Event definition of this type is not supported ).
### User Stories
Zeebe does not need to check elements in non-executable pools - as they are not executed and are ignored anyway.
This would allow using all BPMN elements for documentation purposes
### Implementation Notes
<!-- Notes to consider for implementation, for example:
* In Cawemo we already have the capability to manage templates via the feature that we call “catalog”
* What we would build now is the ability to a) use this feature in the web modeler to create templates and b) when the context pad opens for defining the type of a task, the templates that decorate service tasks are shown
* We should clarify terminology (integrations vs. connectors vs. job workers vs. element templates.) Particularly “element templates” might not be a term that a user intuitively understands.
* See these high level wireframes to capture the idea -->
### Breakdown
* [x] https://github.com/camunda/zeebe/issues/9542
* [x] https://github.com/camunda/camunda-modeler/issues/3368
#### Discovery phase ##
<!-- Example: link to "Conduct customer interview with xyz" -->
#### Define phase ##
<!-- Consider: UI, UX, technical design, documentation design -->
<!-- Example: link to "Define User-Journey Flow" or "Define target architecture" -->
Design Planning
* Reviewed by design: {date}
* Designer assigned: {Yes, No Design Necessary, or No Designer Available}
* Assignee:
* Design Brief - {link to design brief }
* Research Brief - {link to research brief }
Design Deliverables
* {Deliverable Name} {Link to GH Issue}
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
<!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. -->
#### Validate phase ##
<!-- Example: link to "Evaluate usage data of last quarter" -->
### Links to additional collateral
- https://github.com/camunda/zeebe/issues/9542
| 1.0 | Allow unsupported BPMN elements in non executable pools - ### Value Proposition Statement
Use unsupported BPMN elements in non-executable pools for documentation purposes.
### User Problem
Customers following our BPMN methodology described also in Real-Life BPMN might create collaboration diagrams that contain executable and non-executable Pools (for example a model like in https://camunda.com/bpmn/examples/#bpmn-examples-four-eyes-principle).
In https://github.com/camunda-community-hub/camunda-7-to-8-migration/issues/19 Stephan raised the problem that Zeebe rejects deployments containing unsupported elements, even if in a nonexecutable pool (e.g. Element: ConditionalEventDefinition_0lvmueg ERROR: Event definition of this type is not supported ).
### User Stories
Zeebe does not need to check elements in non-executable pools - as they are not executed and are ignored anyway.
This would allow using all BPMN elements for documentation purposes
### Implementation Notes
<!-- Notes to consider for implementation, for example:
* In Cawemo we already have the capability to manage templates via the feature that we call “catalog”
* What we would build now is the ability to a) use this feature in the web modeler to create templates and b) when the context pad opens for defining the type of a task, the templates that decorate service tasks are shown
* We should clarify terminology (integrations vs. connectors vs. job workers vs. element templates.) Particularly “element templates” might not be a term that a user intuitively understands.
* See these high level wireframes to capture the idea -->
### Breakdown
* [x] https://github.com/camunda/zeebe/issues/9542
* [x] https://github.com/camunda/camunda-modeler/issues/3368
#### Discovery phase ##
<!-- Example: link to "Conduct customer interview with xyz" -->
#### Define phase ##
<!-- Consider: UI, UX, technical design, documentation design -->
<!-- Example: link to "Define User-Journey Flow" or "Define target architecture" -->
Design Planning
* Reviewed by design: {date}
* Designer assigned: {Yes, No Design Necessary, or No Designer Available}
* Assignee:
* Design Brief - {link to design brief }
* Research Brief - {link to research brief }
Design Deliverables
* {Deliverable Name} {Link to GH Issue}
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
<!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. -->
#### Validate phase ##
<!-- Example: link to "Evaluate usage data of last quarter" -->
### Links to additional collateral
- https://github.com/camunda/zeebe/issues/9542
| process | allow unsupported bpmn elements in non executable pools value proposition statement use unsupported bpmn elements in non executable pools for documentation purposes user problem customers following our bpmn methodology described also in real life bpmn might create collaboration diagrams that contain executable and non executable pools for example a model like in in stephan raised the problem that zeebe rejects deployments containing unsupported elements even if in a nonexecutable pool e g element conditionaleventdefinition error event definition of this type is not supported user stories zeebe does not need to check elements in non executable pools as they are not executed and are ignored anyway this would allow using all bpmn elements for documentation purposes implementation notes notes to consider for implementation for example in cawemo we already have the capability to manage templates via the feature that we call “catalog” what we would build now is the ability to a use this feature in the web modeler to create templates and b when the context pad opens for defining the type of a task the templates that decorate service tasks are shown we should clarify terminology integrations vs connectors vs job workers vs element templates particularly “element templates” might not be a term that a user intuitively understands see these high level wireframes to capture the idea breakdown discovery phase define phase design planning reviewed by design date designer assigned yes no design necessary or no designer available assignee design brief link to design brief research brief link to research brief design deliverables deliverable name link to gh issue documentation planning risk management risk class risk treatment implement phase validate phase links to additional collateral | 1 |
13,305 | 15,780,024,781 | IssuesEvent | 2021-04-01 09:25:57 | ooi-data/CE04OSPS-SF01B-2A-CTDPFA107-streamed-ctdpf_sbe43_sample | https://api.github.com/repos/ooi-data/CE04OSPS-SF01B-2A-CTDPFA107-streamed-ctdpf_sbe43_sample | opened | 🛑 Processing failed: ResponseParserError | process | ## Overview
`ResponseParserError` found in `processing_task` task during run ended on 2021-04-01T09:25:57.143881.
## Details
Flow name: `CE04OSPS-SF01B-2A-CTDPFA107-streamed-ctdpf_sbe43_sample`
Task name: `processing_task`
Error type: `ResponseParserError`
Error message: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 311, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1146, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1445, in rm
super().rm(path, recursive=recursive, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 196, in rm
maybe_sync(self._rm, self, path, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 100, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 71, in sync
raise exc.with_traceback(tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 55, in f
result[0] = await future
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1404, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1396, in _bulk_delete
await self._call_s3(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err) from err
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 234, in _call_s3
return await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 140, in _make_api_call
http, parsed_response = await self._make_request(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 160, in _make_request
return await self._endpoint.make_request(operation_model, request_dict)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 101, in _send_request
success_response, exception = await self._get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 120, in _get_response
success_response, exception = await self._do_get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 180, in _do_get_response
parsed_response = parser.parse(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 245, in parse
parsed = self._do_parse(response, shape)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 809, in _do_parse
self._add_modeled_parse(response, shape, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 818, in _add_modeled_parse
self._parse_payload(response, shape, member_shapes, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 858, in _parse_payload
original_parsed = self._initial_body_parse(response['body'])
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 944, in _initial_body_parse
return self._parse_xml_string_to_dom(xml_string)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 454, in _parse_xml_string_to_dom
raise ResponseParserError(
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
```
</details>
| 1.0 | 🛑 Processing failed: ResponseParserError - ## Overview
`ResponseParserError` found in `processing_task` task during run ended on 2021-04-01T09:25:57.143881.
## Details
Flow name: `CE04OSPS-SF01B-2A-CTDPFA107-streamed-ctdpf_sbe43_sample`
Task name: `processing_task`
Error type: `ResponseParserError`
Error message: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 311, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1146, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1445, in rm
super().rm(path, recursive=recursive, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 196, in rm
maybe_sync(self._rm, self, path, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 100, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 71, in sync
raise exc.with_traceback(tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 55, in f
result[0] = await future
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1404, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1396, in _bulk_delete
await self._call_s3(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err) from err
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 234, in _call_s3
return await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 140, in _make_api_call
http, parsed_response = await self._make_request(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 160, in _make_request
return await self._endpoint.make_request(operation_model, request_dict)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 101, in _send_request
success_response, exception = await self._get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 120, in _get_response
success_response, exception = await self._do_get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 180, in _do_get_response
parsed_response = parser.parse(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 245, in parse
parsed = self._do_parse(response, shape)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 809, in _do_parse
self._add_modeled_parse(response, shape, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 818, in _add_modeled_parse
self._parse_payload(response, shape, member_shapes, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 858, in _parse_payload
original_parsed = self._initial_body_parse(response['body'])
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 944, in _initial_body_parse
return self._parse_xml_string_to_dom(xml_string)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 454, in _parse_xml_string_to_dom
raise ResponseParserError(
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
```
</details>
| process | 🛑 processing failed responseparsererror overview responseparsererror found in processing task task during run ended on details flow name streamed ctdpf sample task name processing task error type responseparsererror error message unable to parse response no element found line column invalid xml received further retries may succeed b n traceback traceback most recent call last file usr share miniconda envs harvester lib site packages ooi harvester processor pipeline py line in processing task file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize zarr source store fs delete source store root recursive true file srv conda envs notebook lib site packages fsspec spec py line in delete return self rm path recursive recursive maxdepth maxdepth file srv conda envs notebook lib site packages core py line in rm super rm path recursive recursive kwargs file srv conda envs notebook lib site packages fsspec asyn py line in rm maybe sync self rm self path kwargs file srv conda envs notebook lib site packages fsspec asyn py line in maybe sync return sync loop func args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in sync raise exc with traceback tb file srv conda envs notebook lib site packages fsspec asyn py line in f result await future file srv conda envs notebook lib site packages core py line in rm await asyncio gather file srv conda envs notebook lib site packages core py line in bulk delete await self call file srv conda envs notebook lib site packages core py line in call raise translate boto error err from err file srv conda envs notebook lib site packages core py line in call return await method additional kwargs file srv conda envs notebook lib site packages aiobotocore client py line in make api call http parsed response await self make request file srv conda envs notebook lib site packages aiobotocore client py line in make request return await self endpoint make request operation model request dict file srv conda envs notebook lib site packages aiobotocore endpoint py line in send request success response exception await self get response file srv conda envs notebook lib site packages aiobotocore endpoint py line in get response success response exception await self do get response file srv conda envs notebook lib site packages aiobotocore endpoint py line in do get response parsed response parser parse file srv conda envs notebook lib site packages botocore parsers py line in parse parsed self do parse response shape file srv conda envs notebook lib site packages botocore parsers py line in do parse self add modeled parse response shape final parsed file srv conda envs notebook lib site packages botocore parsers py line in add modeled parse self parse payload response shape member shapes final parsed file srv conda envs notebook lib site packages botocore parsers py line in parse payload original parsed self initial body parse response file srv conda envs notebook lib site packages botocore parsers py line in initial body parse return self parse xml string to dom xml string file srv conda envs notebook lib site packages botocore parsers py line in parse xml string to dom raise responseparsererror botocore parsers responseparsererror unable to parse response no element found line column invalid xml received further retries may succeed b n | 1 |
7,862 | 11,038,993,010 | IssuesEvent | 2019-12-08 17:31:56 | Jeffail/benthos | https://api.github.com/repos/Jeffail/benthos | closed | Permit customising successful response status in http_client output | enhancement outputs processors sponsored | When using the `http_client` output, it considers every `2xx` response as successful, acking messages in the pipeline. However, it would be desirable to customise this behaviour, by adding additional response statuses that should be considered a "success".
Proposal:
Add `successful_on` field to the `http_client` output that lists additional status codes that are considered successful.
e.g.
```yaml
successful_on:
- 409
``` | 1.0 | Permit customising successful response status in http_client output - When using the `http_client` output, it considers every `2xx` response as successful, acking messages in the pipeline. However, it would be desirable to customise this behaviour, by adding additional response statuses that should be considered a "success".
Proposal:
Add `successful_on` field to the `http_client` output that lists additional status codes that are considered successful.
e.g.
```yaml
successful_on:
- 409
``` | process | permit customising successful response status in http client output when using the http client output it considers every response as successful acking messages in the pipeline however it would be desirable to customise this behaviour by adding additional response statuses that should be considered a success proposal add successful on field to the http client output that lists additional status codes that are considered successful e g yaml successful on | 1 |
4,318 | 7,204,594,853 | IssuesEvent | 2018-02-06 13:16:18 | openvstorage/framework | https://api.github.com/repos/openvstorage/framework | closed | Could not determine the Arakoon master node | process_wontfix | hi
when i first run ovs setup error
[root@localhost OpenvStorage]# ovs setup
+++++++++++++++++++++++++++++
+++ Open vStorage Setup +++
+++++++++++++++++++++++++++++
+++ Setting up connections +++
+++ Collecting cluster information +++
Avahi installed
[127.0.0.1] dbus already running
[127.0.0.1] avahi-daemon already running
No clusters found. Make a selection please:
1: Create a new cluster
2: Join a cluster
Select Nr: 1
Select the public IP address of localhost. Make a selection please:
1: 192.168.122.1
2: 192.168.3.30
Select Nr: 2
Please enter the cluster name: aa
+++ Preparing node +++
Setting up and exchanging SSH keys
Updating hosts file
+++ Setting up first node +++
Setting up configuration management
Use an external cluster? (y/[n]):
Setting up configuration Arakoon
Enable RDMA? (y/[n]):
ERROR: Failed to setup first node
ERROR: Could not determine the Arakoon master node
+++++++++++++++++++++++++++++++++++++++++++++++++++++
+++ An unexpected error occurred: +++
+++ Could not determine the Arakoon master node +++
+++++++++++++++++++++++++++++++++++++++++++++++++++++
The log
2017-12-13 17:20:57 69400 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 0 - INFO - Open vStorage Setup
2017-12-13 17:20:57 69400 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 1 - INFO - Setting up connections
2017-12-13 17:20:57 71800 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 2 - INFO - Collecting cluster information
2017-12-13 17:20:57 72600 +0800 - localhost.localdomain - 20039/140322542499648 - lib/nodeinstallation.py - setup_node - 3 - DEBUG - Current host: localhost
2017-12-13 17:20:57 72600 +0800 - localhost.localdomain - 20039/140322542499648 - lib/nodeinstallation.py - setup_node - 4 - DEBUG - Cluster selection
2017-12-13 17:20:57 73000 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 5 - INFO - Avahi installed
2017-12-13 17:21:09 02400 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 6 - INFO - Preparing node
2017-12-13 17:21:09 02500 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 7 - INFO - Setting up and exchanging SSH keys
2017-12-13 17:21:09 73700 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 8 - INFO - Updating hosts file
2017-12-13 17:21:09 76100 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 9 - INFO - Setting up first node
2017-12-13 17:21:09 76500 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 10 - INFO - Setting up configuration management
2017-12-13 17:21:15 24400 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 11 - INFO - Setting up configuration Arakoon
2017-12-13 17:21:22 45200 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 25 - ERROR - Failed to setup first node
Traceback (most recent call last):
File "ovs/lib/nodeinstallation.py", line 469, in setup_node
rdma=rdma)
File "ovs/lib/nodeinstallation.py", line 771, in _setup_first_node
metadata = ArakoonInstaller.get_unused_arakoon_metadata_and_claim(cluster_type=ServiceType.ARAKOON_CLUSTER_TYPES.FWK)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/arakooninstaller.py", line 684, in get_unused_arakoon_metadata_and_claim
if arakoon_client.exists(ArakoonInstaller.METADATA_KEY):
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 44, in new_function
return f(self, *args, **kw)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 167, in exists
return PyrakoonClient._try(self._identifier, self._client.exists, key)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 213, in _try
return_value = method(*args, **kwargs)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 160, in wrapped
return fun(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 143, in wrapped
return fun(*new_args)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 246, in exists
return self._client.exists(key, consistency = consistency)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/client/utils.py", line 99, in wrapped
return self._process(message) #pylint: disable=W0212
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1121, in _process
connection = self._send_to_master(bytes_)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1180, in _send_to_master
self.determine_master()
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1223, in determine_master
raise ArakoonNoMaster
ArakoonNoMaster: Could not determine the Arakoon master node
2017-12-13 17:21:22 45400 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 26 - ERROR - Could not determine the Arakoon master node
Traceback (most recent call last):
File "ovs/lib/nodeinstallation.py", line 469, in setup_node
rdma=rdma)
File "ovs/lib/nodeinstallation.py", line 771, in _setup_first_node
metadata = ArakoonInstaller.get_unused_arakoon_metadata_and_claim(cluster_type=ServiceType.ARAKOON_CLUSTER_TYPES.FWK)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/arakooninstaller.py", line 684, in get_unused_arakoon_metadata_and_claim
if arakoon_client.exists(ArakoonInstaller.METADATA_KEY):
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 44, in new_function
return f(self, *args, **kw)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 167, in exists
return PyrakoonClient._try(self._identifier, self._client.exists, key)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 213, in _try
return_value = method(*args, **kwargs)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 160, in wrapped
return fun(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 143, in wrapped
return fun(*new_args)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 246, in exists
return self._client.exists(key, consistency = consistency)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/client/utils.py", line 99, in wrapped
return self._process(message) #pylint: disable=W0212
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1121, in _process
connection = self._send_to_master(bytes_)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1180, in _send_to_master
self.determine_master()
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1223, in determine_master
raise ArakoonNoMaster
ArakoonNoMaster: Could not determine the Arakoon master node
2017-12-13 17:21:22 46100 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 27 - INFO -
2017-12-13 17:21:22 46100 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 28 - ERROR - An unexpected error occurred:
Traceback (most recent call last):
File "ovs/lib/nodeinstallation.py", line 469, in setup_node
rdma=rdma)
File "ovs/lib/nodeinstallation.py", line 771, in _setup_first_node
metadata = ArakoonInstaller.get_unused_arakoon_metadata_and_claim(cluster_type=ServiceType.ARAKOON_CLUSTER_TYPES.FWK)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/arakooninstaller.py", line 684, in get_unused_arakoon_metadata_and_claim
if arakoon_client.exists(ArakoonInstaller.METADATA_KEY):
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 44, in new_function
return f(self, *args, **kw)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 167, in exists
return PyrakoonClient._try(self._identifier, self._client.exists, key)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 213, in _try
return_value = method(*args, **kwargs)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 160, in wrapped
return fun(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 143, in wrapped
return fun(*new_args)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 246, in exists
return self._client.exists(key, consistency = consistency)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/client/utils.py", line 99, in wrapped
return self._process(message) #pylint: disable=W0212
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1121, in _process
connection = self._send_to_master(bytes_)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1180, in _send_to_master
self.determine_master()
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1223, in determine_master
raise ArakoonNoMaster
ArakoonNoMaster: Could not determine the Arakoon master node
2017-12-13 17:21:22 46200 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 29 - ERROR - Could not determine the Arakoon master node
Traceback (most recent call last):
File "ovs/lib/nodeinstallation.py", line 469, in setup_node
rdma=rdma)
File "ovs/lib/nodeinstallation.py", line 771, in _setup_first_node
metadata = ArakoonInstaller.get_unused_arakoon_metadata_and_claim(cluster_type=ServiceType.ARAKOON_CLUSTER_TYPES.FWK)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/arakooninstaller.py", line 684, in get_unused_arakoon_metadata_and_claim
if arakoon_client.exists(ArakoonInstaller.METADATA_KEY):
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 44, in new_function
return f(self, *args, **kw)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 167, in exists
return PyrakoonClient._try(self._identifier, self._client.exists, key)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 213, in _try
return_value = method(*args, **kwargs)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 160, in wrapped
return fun(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 143, in wrapped
return fun(*new_args)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 246, in exists
return self._client.exists(key, consistency = consistency)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/client/utils.py", line 99, in wrapped
return self._process(message) #pylint: disable=W0212
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1121, in _process
connection = self._send_to_master(bytes_)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1180, in _send_to_master
self.determine_master()
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1223, in determine_master
raise ArakoonNoMaster
ArakoonNoMaster: Could not determine the Arakoon master node
how can i solve this problem
thanks
| 1.0 | Could not determine the Arakoon master node - hi
when i first run ovs setup error
[root@localhost OpenvStorage]# ovs setup
+++++++++++++++++++++++++++++
+++ Open vStorage Setup +++
+++++++++++++++++++++++++++++
+++ Setting up connections +++
+++ Collecting cluster information +++
Avahi installed
[127.0.0.1] dbus already running
[127.0.0.1] avahi-daemon already running
No clusters found. Make a selection please:
1: Create a new cluster
2: Join a cluster
Select Nr: 1
Select the public IP address of localhost. Make a selection please:
1: 192.168.122.1
2: 192.168.3.30
Select Nr: 2
Please enter the cluster name: aa
+++ Preparing node +++
Setting up and exchanging SSH keys
Updating hosts file
+++ Setting up first node +++
Setting up configuration management
Use an external cluster? (y/[n]):
Setting up configuration Arakoon
Enable RDMA? (y/[n]):
ERROR: Failed to setup first node
ERROR: Could not determine the Arakoon master node
+++++++++++++++++++++++++++++++++++++++++++++++++++++
+++ An unexpected error occurred: +++
+++ Could not determine the Arakoon master node +++
+++++++++++++++++++++++++++++++++++++++++++++++++++++
The log
2017-12-13 17:20:57 69400 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 0 - INFO - Open vStorage Setup
2017-12-13 17:20:57 69400 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 1 - INFO - Setting up connections
2017-12-13 17:20:57 71800 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 2 - INFO - Collecting cluster information
2017-12-13 17:20:57 72600 +0800 - localhost.localdomain - 20039/140322542499648 - lib/nodeinstallation.py - setup_node - 3 - DEBUG - Current host: localhost
2017-12-13 17:20:57 72600 +0800 - localhost.localdomain - 20039/140322542499648 - lib/nodeinstallation.py - setup_node - 4 - DEBUG - Cluster selection
2017-12-13 17:20:57 73000 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 5 - INFO - Avahi installed
2017-12-13 17:21:09 02400 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 6 - INFO - Preparing node
2017-12-13 17:21:09 02500 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 7 - INFO - Setting up and exchanging SSH keys
2017-12-13 17:21:09 73700 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 8 - INFO - Updating hosts file
2017-12-13 17:21:09 76100 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 9 - INFO - Setting up first node
2017-12-13 17:21:09 76500 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 10 - INFO - Setting up configuration management
2017-12-13 17:21:15 24400 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 11 - INFO - Setting up configuration Arakoon
2017-12-13 17:21:22 45200 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 25 - ERROR - Failed to setup first node
Traceback (most recent call last):
File "ovs/lib/nodeinstallation.py", line 469, in setup_node
rdma=rdma)
File "ovs/lib/nodeinstallation.py", line 771, in _setup_first_node
metadata = ArakoonInstaller.get_unused_arakoon_metadata_and_claim(cluster_type=ServiceType.ARAKOON_CLUSTER_TYPES.FWK)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/arakooninstaller.py", line 684, in get_unused_arakoon_metadata_and_claim
if arakoon_client.exists(ArakoonInstaller.METADATA_KEY):
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 44, in new_function
return f(self, *args, **kw)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 167, in exists
return PyrakoonClient._try(self._identifier, self._client.exists, key)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 213, in _try
return_value = method(*args, **kwargs)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 160, in wrapped
return fun(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 143, in wrapped
return fun(*new_args)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 246, in exists
return self._client.exists(key, consistency = consistency)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/client/utils.py", line 99, in wrapped
return self._process(message) #pylint: disable=W0212
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1121, in _process
connection = self._send_to_master(bytes_)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1180, in _send_to_master
self.determine_master()
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1223, in determine_master
raise ArakoonNoMaster
ArakoonNoMaster: Could not determine the Arakoon master node
2017-12-13 17:21:22 45400 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 26 - ERROR - Could not determine the Arakoon master node
Traceback (most recent call last):
File "ovs/lib/nodeinstallation.py", line 469, in setup_node
rdma=rdma)
File "ovs/lib/nodeinstallation.py", line 771, in _setup_first_node
metadata = ArakoonInstaller.get_unused_arakoon_metadata_and_claim(cluster_type=ServiceType.ARAKOON_CLUSTER_TYPES.FWK)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/arakooninstaller.py", line 684, in get_unused_arakoon_metadata_and_claim
if arakoon_client.exists(ArakoonInstaller.METADATA_KEY):
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 44, in new_function
return f(self, *args, **kw)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 167, in exists
return PyrakoonClient._try(self._identifier, self._client.exists, key)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 213, in _try
return_value = method(*args, **kwargs)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 160, in wrapped
return fun(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 143, in wrapped
return fun(*new_args)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 246, in exists
return self._client.exists(key, consistency = consistency)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/client/utils.py", line 99, in wrapped
return self._process(message) #pylint: disable=W0212
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1121, in _process
connection = self._send_to_master(bytes_)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1180, in _send_to_master
self.determine_master()
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1223, in determine_master
raise ArakoonNoMaster
ArakoonNoMaster: Could not determine the Arakoon master node
2017-12-13 17:21:22 46100 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 27 - INFO -
2017-12-13 17:21:22 46100 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 28 - ERROR - An unexpected error occurred:
Traceback (most recent call last):
File "ovs/lib/nodeinstallation.py", line 469, in setup_node
rdma=rdma)
File "ovs/lib/nodeinstallation.py", line 771, in _setup_first_node
metadata = ArakoonInstaller.get_unused_arakoon_metadata_and_claim(cluster_type=ServiceType.ARAKOON_CLUSTER_TYPES.FWK)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/arakooninstaller.py", line 684, in get_unused_arakoon_metadata_and_claim
if arakoon_client.exists(ArakoonInstaller.METADATA_KEY):
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 44, in new_function
return f(self, *args, **kw)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 167, in exists
return PyrakoonClient._try(self._identifier, self._client.exists, key)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 213, in _try
return_value = method(*args, **kwargs)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 160, in wrapped
return fun(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 143, in wrapped
return fun(*new_args)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 246, in exists
return self._client.exists(key, consistency = consistency)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/client/utils.py", line 99, in wrapped
return self._process(message) #pylint: disable=W0212
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1121, in _process
connection = self._send_to_master(bytes_)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1180, in _send_to_master
self.determine_master()
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1223, in determine_master
raise ArakoonNoMaster
ArakoonNoMaster: Could not determine the Arakoon master node
2017-12-13 17:21:22 46200 +0800 - localhost.localdomain - 20039/140322542499648 - lib/toolbox.py - log - 29 - ERROR - Could not determine the Arakoon master node
Traceback (most recent call last):
File "ovs/lib/nodeinstallation.py", line 469, in setup_node
rdma=rdma)
File "ovs/lib/nodeinstallation.py", line 771, in _setup_first_node
metadata = ArakoonInstaller.get_unused_arakoon_metadata_and_claim(cluster_type=ServiceType.ARAKOON_CLUSTER_TYPES.FWK)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/arakooninstaller.py", line 684, in get_unused_arakoon_metadata_and_claim
if arakoon_client.exists(ArakoonInstaller.METADATA_KEY):
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 44, in new_function
return f(self, *args, **kw)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 167, in exists
return PyrakoonClient._try(self._identifier, self._client.exists, key)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/client.py", line 213, in _try
return_value = method(*args, **kwargs)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 160, in wrapped
return fun(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 143, in wrapped
return fun(*new_args)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 246, in exists
return self._client.exists(key, consistency = consistency)
File "<update_argspec>", line 5, in exists
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/client/utils.py", line 99, in wrapped
return self._process(message) #pylint: disable=W0212
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1121, in _process
connection = self._send_to_master(bytes_)
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1180, in _send_to_master
self.determine_master()
File "/usr/lib/python2.7/site-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1223, in determine_master
raise ArakoonNoMaster
ArakoonNoMaster: Could not determine the Arakoon master node
how can i solve this problem
thanks
| process | could not determine the arakoon master node hi when i first run ovs setup error ovs setup open vstorage setup setting up connections collecting cluster information avahi installed dbus already running avahi daemon already running no clusters found make a selection please create a new cluster join a cluster select nr select the public ip address of localhost make a selection please select nr please enter the cluster name aa preparing node setting up and exchanging ssh keys updating hosts file setting up first node setting up configuration management use an external cluster y setting up configuration arakoon enable rdma y error failed to setup first node error could not determine the arakoon master node an unexpected error occurred could not determine the arakoon master node the log localhost localdomain lib toolbox py log info open vstorage setup localhost localdomain lib toolbox py log info setting up connections localhost localdomain lib toolbox py log info collecting cluster information localhost localdomain lib nodeinstallation py setup node debug current host localhost localhost localdomain lib nodeinstallation py setup node debug cluster selection localhost localdomain lib toolbox py log info avahi installed localhost localdomain lib toolbox py log info preparing node localhost localdomain lib toolbox py log info setting up and exchanging ssh keys localhost localdomain lib toolbox py log info updating hosts file localhost localdomain lib toolbox py log info setting up first node localhost localdomain lib toolbox py log info setting up configuration management localhost localdomain lib toolbox py log info setting up configuration arakoon localhost localdomain lib toolbox py log error failed to setup first node traceback most recent call last file ovs lib nodeinstallation py line in setup node rdma rdma file ovs lib nodeinstallation py line in setup first node metadata arakooninstaller get unused arakoon metadata and claim cluster type servicetype arakoon cluster types fwk file usr lib site packages ovs extensions db arakoon arakooninstaller py line in get unused arakoon metadata and claim if arakoon client exists arakooninstaller metadata key file usr lib site packages ovs extensions db arakoon pyrakoon client py line in new function return f self args kw file usr lib site packages ovs extensions db arakoon pyrakoon client py line in exists return pyrakoonclient try self identifier self client exists key file usr lib site packages ovs extensions db arakoon pyrakoon client py line in try return value method args kwargs file line in exists file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in wrapped return fun args kwargs file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in wrapped return fun new args file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in exists return self client exists key consistency consistency file line in exists file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon client utils py line in wrapped return self process message pylint disable file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in process connection self send to master bytes file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in send to master self determine master file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in determine master raise arakoonnomaster arakoonnomaster could not determine the arakoon master node localhost localdomain lib toolbox py log error could not determine the arakoon master node traceback most recent call last file ovs lib nodeinstallation py line in setup node rdma rdma file ovs lib nodeinstallation py line in setup first node metadata arakooninstaller get unused arakoon metadata and claim cluster type servicetype arakoon cluster types fwk file usr lib site packages ovs extensions db arakoon arakooninstaller py line in get unused arakoon metadata and claim if arakoon client exists arakooninstaller metadata key file usr lib site packages ovs extensions db arakoon pyrakoon client py line in new function return f self args kw file usr lib site packages ovs extensions db arakoon pyrakoon client py line in exists return pyrakoonclient try self identifier self client exists key file usr lib site packages ovs extensions db arakoon pyrakoon client py line in try return value method args kwargs file line in exists file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in wrapped return fun args kwargs file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in wrapped return fun new args file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in exists return self client exists key consistency consistency file line in exists file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon client utils py line in wrapped return self process message pylint disable file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in process connection self send to master bytes file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in send to master self determine master file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in determine master raise arakoonnomaster arakoonnomaster could not determine the arakoon master node localhost localdomain lib toolbox py log info localhost localdomain lib toolbox py log error an unexpected error occurred traceback most recent call last file ovs lib nodeinstallation py line in setup node rdma rdma file ovs lib nodeinstallation py line in setup first node metadata arakooninstaller get unused arakoon metadata and claim cluster type servicetype arakoon cluster types fwk file usr lib site packages ovs extensions db arakoon arakooninstaller py line in get unused arakoon metadata and claim if arakoon client exists arakooninstaller metadata key file usr lib site packages ovs extensions db arakoon pyrakoon client py line in new function return f self args kw file usr lib site packages ovs extensions db arakoon pyrakoon client py line in exists return pyrakoonclient try self identifier self client exists key file usr lib site packages ovs extensions db arakoon pyrakoon client py line in try return value method args kwargs file line in exists file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in wrapped return fun args kwargs file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in wrapped return fun new args file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in exists return self client exists key consistency consistency file line in exists file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon client utils py line in wrapped return self process message pylint disable file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in process connection self send to master bytes file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in send to master self determine master file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in determine master raise arakoonnomaster arakoonnomaster could not determine the arakoon master node localhost localdomain lib toolbox py log error could not determine the arakoon master node traceback most recent call last file ovs lib nodeinstallation py line in setup node rdma rdma file ovs lib nodeinstallation py line in setup first node metadata arakooninstaller get unused arakoon metadata and claim cluster type servicetype arakoon cluster types fwk file usr lib site packages ovs extensions db arakoon arakooninstaller py line in get unused arakoon metadata and claim if arakoon client exists arakooninstaller metadata key file usr lib site packages ovs extensions db arakoon pyrakoon client py line in new function return f self args kw file usr lib site packages ovs extensions db arakoon pyrakoon client py line in exists return pyrakoonclient try self identifier self client exists key file usr lib site packages ovs extensions db arakoon pyrakoon client py line in try return value method args kwargs file line in exists file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in wrapped return fun args kwargs file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in wrapped return fun new args file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in exists return self client exists key consistency consistency file line in exists file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon client utils py line in wrapped return self process message pylint disable file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in process connection self send to master bytes file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in send to master self determine master file usr lib site packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in determine master raise arakoonnomaster arakoonnomaster could not determine the arakoon master node how can i solve this problem thanks | 1 |
3,233 | 6,186,354,014 | IssuesEvent | 2017-07-04 01:56:24 | checkstyle/checkstyle | https://api.github.com/repos/checkstyle/checkstyle | closed | api: LocalizedMessages class should be removed | approved breaking compatibility checkstyle8 | https://github.com/checkstyle/checkstyle/blob/master/src/main/java/com/puppycrawl/tools/checkstyle/api/LocalizedMessages.java
just need to be removed. Simple collection should be used in class that want to keep violations. | True | api: LocalizedMessages class should be removed - https://github.com/checkstyle/checkstyle/blob/master/src/main/java/com/puppycrawl/tools/checkstyle/api/LocalizedMessages.java
just need to be removed. Simple collection should be used in class that want to keep violations. | non_process | api localizedmessages class should be removed just need to be removed simple collection should be used in class that want to keep violations | 0 |
438,449 | 12,627,959,311 | IssuesEvent | 2020-06-15 00:22:27 | juancri/covid19-animation-generator | https://api.github.com/repos/juancri/covid19-animation-generator | closed | Add chart type: stacked area | high priority work in progress | - Add stacked area as an addition to line
- It will allow us to generate the graph chile/upc with master | 1.0 | Add chart type: stacked area - - Add stacked area as an addition to line
- It will allow us to generate the graph chile/upc with master | non_process | add chart type stacked area add stacked area as an addition to line it will allow us to generate the graph chile upc with master | 0 |
7,874 | 11,045,602,902 | IssuesEvent | 2019-12-09 15:23:50 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | Question 'response to defenses of other organism involved in symbiotic interaction' | multi-species process | Is 'response to defenses of other organism involved in symbiotic interaction' really different from any 'response to defenses of other organism'?
I think we should drop the 'involved in symbiotic interaction' from the term, as well as from the definition 'Any process that results in a change in state or activity of a cell or an organism (in terms of movement, secretion, enzyme production, gene expression, etc.) as a result of detecting the defenses of a second organism, ~where the two organisms are in a symbiotic interaction.~'
@mgiglio99 @ValWood @addiehl
What do you think ?
Thanks, Pascale
| 1.0 | Question 'response to defenses of other organism involved in symbiotic interaction' - Is 'response to defenses of other organism involved in symbiotic interaction' really different from any 'response to defenses of other organism'?
I think we should drop the 'involved in symbiotic interaction' from the term, as well as from the definition 'Any process that results in a change in state or activity of a cell or an organism (in terms of movement, secretion, enzyme production, gene expression, etc.) as a result of detecting the defenses of a second organism, ~where the two organisms are in a symbiotic interaction.~'
@mgiglio99 @ValWood @addiehl
What do you think ?
Thanks, Pascale
| process | question response to defenses of other organism involved in symbiotic interaction is response to defenses of other organism involved in symbiotic interaction really different from any response to defenses of other organism i think we should drop the involved in symbiotic interaction from the term as well as from the definition any process that results in a change in state or activity of a cell or an organism in terms of movement secretion enzyme production gene expression etc as a result of detecting the defenses of a second organism where the two organisms are in a symbiotic interaction valwood addiehl what do you think thanks pascale | 1 |
11,805 | 14,627,548,775 | IssuesEvent | 2020-12-23 12:30:16 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Mobile apps] Unable to enroll into combined study | Bug P0 Participant datastore Process: Fixed Process: Tested dev | User is unable to enroll into Combined Study(Token and Eligibility) in android mobile

Instance- DEV
| 2.0 | [Mobile apps] Unable to enroll into combined study - User is unable to enroll into Combined Study(Token and Eligibility) in android mobile

Instance- DEV
| process | unable to enroll into combined study user is unable to enroll into combined study token and eligibility in android mobile instance dev | 1 |
122,976 | 10,242,543,271 | IssuesEvent | 2019-08-20 05:27:32 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | seek (forward or backward) on vimeo video messes up entire AC table | QA/Test-Plan-Specified QA/Yes bug feature/rewards release-notes/exclude release/blocking | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Found while testing https://github.com/brave/brave-browser/issues/4017
If you watch a vimeo video normally, your AC table is fine. However, if you start seeking (forward or backward), the AC table gets really messed up.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Clean profile on 0.68.x
2. Enable Rewards (keep this open in one window)
3. Open a second window and visit some sites - a mix: regular sites, youtube videos, etc.
4. Watch a vimeo video. See it added to AC table. Note its percentage. If you check `publisher_info_db` at this point, you see normal values for duration, score, etc.

5. Once it's added to the table, move the slider bar forward (essentially fast forwarding the video)
6. See the % for vimeo video increased on brave://rewards page without actually watching since you fast forwarded. This shouldn't happen.

7. Move slider bar close to the beginning of the video. See vimeo video removed from AC table. If you check `publisher_info_db` at this point, you could see a negative value for duration (depending on how much of the video you had watched). This shouldn't happen.


8. Watch vimeo video for a bit. It will get added back to AC table but it is now at 100%, all other sites that were there previously are gone. If you check `publisher_info_db` at this point, you will see a normal looking duration, but your score is extremely high.


9. If you try to add sites to your auto contribute table at this point, they won't add. I'm guessing bc the score for the vimeo video is so high.
Note - I could not reproduce this with a YouTube video in place of vimeo video.
## Actual result:
Seek (forward/backward) on vimeo video screws up ac table.
## Expected result:
AC table not wiped out, duration to be accurately recorded.
## Reproduces how often:
Seems to reproduce easily with seek forward/backward
## Brave version (brave://version info)
Brave | 0.68.130 Chromium: 76.0.3809.100 (Official Build) (64-bit)
-- | --
Revision | ed9d447d30203dc5069e540f05079e493fc1c132-refs/branch-heads/3809@{#990}
OS | Mac OS X
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? 0.67.x n/a, 0.68.x yes
- Can you reproduce this issue with the beta channel? unsure
- Can you reproduce this issue with the dev channel? unsure
- Can you reproduce this issue with the nightly channel? unsure
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? n/a
- Does the issue resolve itself when disabling Brave Rewards? n/a
- Is the issue reproducible on the latest version of Chrome? n/a
## Miscellaneous Information:
If you watch vimeo normally this doesn't appear to happen.
| 1.0 | seek (forward or backward) on vimeo video messes up entire AC table - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Found while testing https://github.com/brave/brave-browser/issues/4017
If you watch a vimeo video normally, your AC table is fine. However, if you start seeking (forward or backward), the AC table gets really messed up.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Clean profile on 0.68.x
2. Enable Rewards (keep this open in one window)
3. Open a second window and visit some sites - a mix: regular sites, youtube videos, etc.
4. Watch a vimeo video. See it added to AC table. Note its percentage. If you check `publisher_info_db` at this point, you see normal values for duration, score, etc.

5. Once it's added to the table, move the slider bar forward (essentially fast forwarding the video)
6. See the % for vimeo video increased on brave://rewards page without actually watching since you fast forwarded. This shouldn't happen.

7. Move slider bar close to the beginning of the video. See vimeo video removed from AC table. If you check `publisher_info_db` at this point, you could see a negative value for duration (depending on how much of the video you had watched). This shouldn't happen.


8. Watch vimeo video for a bit. It will get added back to AC table but it is now at 100%, all other sites that were there previously are gone. If you check `publisher_info_db` at this point, you will see a normal looking duration, but your score is extremely high.


9. If you try to add sites to your auto contribute table at this point, they won't add. I'm guessing bc the score for the vimeo video is so high.
Note - I could not reproduce this with a YouTube video in place of vimeo video.
## Actual result:
Seek (forward/backward) on vimeo video screws up ac table.
## Expected result:
AC table not wiped out, duration to be accurately recorded.
## Reproduces how often:
Seems to reproduce easily with seek forward/backward
## Brave version (brave://version info)
Brave | 0.68.130 Chromium: 76.0.3809.100 (Official Build) (64-bit)
-- | --
Revision | ed9d447d30203dc5069e540f05079e493fc1c132-refs/branch-heads/3809@{#990}
OS | Mac OS X
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? 0.67.x n/a, 0.68.x yes
- Can you reproduce this issue with the beta channel? unsure
- Can you reproduce this issue with the dev channel? unsure
- Can you reproduce this issue with the nightly channel? unsure
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? n/a
- Does the issue resolve itself when disabling Brave Rewards? n/a
- Is the issue reproducible on the latest version of Chrome? n/a
## Miscellaneous Information:
If you watch vimeo normally this doesn't appear to happen.
| non_process | seek forward or backward on vimeo video messes up entire ac table have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description found while testing if you watch a vimeo video normally your ac table is fine however if you start seeking forward or backward the ac table gets really messed up steps to reproduce clean profile on x enable rewards keep this open in one window open a second window and visit some sites a mix regular sites youtube videos etc watch a vimeo video see it added to ac table note its percentage if you check publisher info db at this point you see normal values for duration score etc once it s added to the table move the slider bar forward essentially fast forwarding the video see the for vimeo video increased on brave rewards page without actually watching since you fast forwarded this shouldn t happen move slider bar close to the beginning of the video see vimeo video removed from ac table if you check publisher info db at this point you could see a negative value for duration depending on how much of the video you had watched this shouldn t happen watch vimeo video for a bit it will get added back to ac table but it is now at all other sites that were there previously are gone if you check publisher info db at this point you will see a normal looking duration but your score is extremely high if you try to add sites to your auto contribute table at this point they won t add i m guessing bc the score for the vimeo video is so high note i could not reproduce this with a youtube video in place of vimeo video actual result seek forward backward on vimeo video screws up ac table expected result ac table not wiped out duration to be accurately recorded reproduces how often seems to reproduce easily with seek forward backward brave version brave version info brave chromium official build bit revision refs branch heads os mac os x version channel information can you reproduce this issue with the current release x n a x yes can you reproduce this issue with the beta channel unsure can you reproduce this issue with the dev channel unsure can you reproduce this issue with the nightly channel unsure other additional information does the issue resolve itself when disabling brave shields n a does the issue resolve itself when disabling brave rewards n a is the issue reproducible on the latest version of chrome n a miscellaneous information if you watch vimeo normally this doesn t appear to happen | 0 |
34,711 | 14,498,575,485 | IssuesEvent | 2020-12-11 15:40:30 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Explain the corresponding LUIS app portal/authoring region mapping to publishing region a little further | cognitive-services/svc cxp doc-enhancement language-understanding/subsvc triaged | section "Authoring key creation limits", statement "Make sure you create an app in the authoring region that corresponds to the publishing region where you want your client application to be located."
Someone may take this to mean that there should be a 1-1 mapping between authoring and publishing regions. For example, if your publishing region is in eastus, they may think the authoring region should also be eastus (when in fact there are only 3 authoring regions available worldwide, so that's not possible).
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-reference-regions#luis-authoring-regions bridges the gap.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 407d2781-1fe1-38a6-dc18-37d4e37d651c
* Version Independent ID: 919a2206-13f9-4f51-2e4f-45fe062d7d43
* Content: [Using authoring and runtime keys - LUIS - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-azure-subscription#luis-resources)
* Content Source: [articles/cognitive-services/LUIS/luis-how-to-azure-subscription.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/cognitive-services/LUIS/luis-how-to-azure-subscription.md)
* Service: **cognitive-services**
* Sub-service: **language-understanding**
* GitHub Login: @aahill
* Microsoft Alias: **aahi** | 1.0 | Explain the corresponding LUIS app portal/authoring region mapping to publishing region a little further - section "Authoring key creation limits", statement "Make sure you create an app in the authoring region that corresponds to the publishing region where you want your client application to be located."
Someone may take this to mean that there should be a 1-1 mapping between authoring and publishing regions. For example, if your publishing region is in eastus, they may think the authoring region should also be eastus (when in fact there are only 3 authoring regions available worldwide, so that's not possible).
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-reference-regions#luis-authoring-regions bridges the gap.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 407d2781-1fe1-38a6-dc18-37d4e37d651c
* Version Independent ID: 919a2206-13f9-4f51-2e4f-45fe062d7d43
* Content: [Using authoring and runtime keys - LUIS - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-azure-subscription#luis-resources)
* Content Source: [articles/cognitive-services/LUIS/luis-how-to-azure-subscription.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/cognitive-services/LUIS/luis-how-to-azure-subscription.md)
* Service: **cognitive-services**
* Sub-service: **language-understanding**
* GitHub Login: @aahill
* Microsoft Alias: **aahi** | non_process | explain the corresponding luis app portal authoring region mapping to publishing region a little further section authoring key creation limits statement make sure you create an app in the authoring region that corresponds to the publishing region where you want your client application to be located someone may take this to mean that there should be a mapping between authoring and publishing regions for example if your publishing region is in eastus they may think the authoring region should also be eastus when in fact there are only authoring regions available worldwide so that s not possible bridges the gap document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cognitive services sub service language understanding github login aahill microsoft alias aahi | 0 |
121,076 | 15,836,752,866 | IssuesEvent | 2021-04-06 19:47:24 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | CLP - Back to Top alignment | design frontend needs-grooming planned-work vsa vsa-public-websites | ## Issue Description
_Based on recent design review post Staging review it was identified that the back to top component, need to be align the button the right-most "column" example below. Also discussed rounding out the right side but will need @rtwell review and approval in support_
Example: Alignment for CLP Back to Top:

[Practice Area Feedback Ticket](https://github.com/department-of-veterans-affairs/va.gov-team/issues/20883)
## Acceptance Criteria
- [ ] Designer comments on this issue to approve page styling
- [ ] Back to top is aligned per guidance
- [ ] Validated and tested - Back to top feature is visual and accessible per desired approach.
- [ ] End-to-end tests show 0 violations.
## Appendix
<details>
<summary>Ensure your code changes are covered by E2E tests (expand)</summary>
- Add E2E tests if none exist for this addition/change.
- Update existing E2E tests if this code will change functionality.
- Include axe checks, including hidden content
</details>
<details>
<summary> Run axe checks using the Chrome or Firefox browser plugin (expand)</summary>
- Ensure no heading levels are skipped.
- Ensure all buttons and labeled inputs use semantic HTML elements.
- Ensure all buttons, labeled elements and images are identified using HTML semantic markers or ARIA roles.
- Ensure form fields have clearly defined boundaries or outlines.
- Ensure form fields do not use placeholder text.
- Ensure form fields have highly visible and specific error states.
</details>
<details>
<summary> Test for color contrast and color blindness issues (expand) </summary>
- All text has appropriate contrast.
</details>
<details>
<summary> Zoom layouts to 400% at 1280px width (expand)</summary>
- Ensure readability and usability are supported when zoomed up to 400% at 1280px browser width
- Ensure no content gets focused offscreen or is hidden from view.
</details>
<details>
<summary> Test with 1 or 2 screen readers (expand)</summary>
- Ensure the page includes a skip navigation link.
- Ensure all links are properly descriptive.
- Ensure screen reader users can hear the text equivalent for each image conveying information.
- Ensure screen reader users can hear the text equivalent for each image button.
- Ensure screen reader users can hear labels and instructions for inputs.
- Ensure purely decorative images are not announced by the screenreader.
</details>
<details>
<summary>Navigate using the keyboard only (expand)</summary>
- Ensure all links (navigation, text and/or image), form controls and page functions can be reached with the tab key in a logical order.
- Ensure all links (navigation, text and/or image), form controls and page functions can be triggered with the spacebar, enter key, or arrow keys.
- Ensure all interactive elements can be reached with the tab key in a logical order
- Ensure all interactive elements can be triggered with the spacebar, enter key, or arrow keys.
- Ensure focus is always visible and appears in logical order.
- Ensure each interactive element has visible focus state which appears in logical order.
</details>
---
## How to configure this issue
- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
| 1.0 | CLP - Back to Top alignment - ## Issue Description
_Based on recent design review post Staging review it was identified that the back to top component, need to be align the button the right-most "column" example below. Also discussed rounding out the right side but will need @rtwell review and approval in support_
Example: Alignment for CLP Back to Top:

[Practice Area Feedback Ticket](https://github.com/department-of-veterans-affairs/va.gov-team/issues/20883)
## Acceptance Criteria
- [ ] Designer comments on this issue to approve page styling
- [ ] Back to top is aligned per guidance
- [ ] Validated and tested - Back to top feature is visual and accessible per desired approach.
- [ ] End-to-end tests show 0 violations.
## Appendix
<details>
<summary>Ensure your code changes are covered by E2E tests (expand)</summary>
- Add E2E tests if none exist for this addition/change.
- Update existing E2E tests if this code will change functionality.
- Include axe checks, including hidden content
</details>
<details>
<summary> Run axe checks using the Chrome or Firefox browser plugin (expand)</summary>
- Ensure no heading levels are skipped.
- Ensure all buttons and labeled inputs use semantic HTML elements.
- Ensure all buttons, labeled elements and images are identified using HTML semantic markers or ARIA roles.
- Ensure form fields have clearly defined boundaries or outlines.
- Ensure form fields do not use placeholder text.
- Ensure form fields have highly visible and specific error states.
</details>
<details>
<summary> Test for color contrast and color blindness issues (expand) </summary>
- All text has appropriate contrast.
</details>
<details>
<summary> Zoom layouts to 400% at 1280px width (expand)</summary>
- Ensure readability and usability are supported when zoomed up to 400% at 1280px browser width
- Ensure no content gets focused offscreen or is hidden from view.
</details>
<details>
<summary> Test with 1 or 2 screen readers (expand)</summary>
- Ensure the page includes a skip navigation link.
- Ensure all links are properly descriptive.
- Ensure screen reader users can hear the text equivalent for each image conveying information.
- Ensure screen reader users can hear the text equivalent for each image button.
- Ensure screen reader users can hear labels and instructions for inputs.
- Ensure purely decorative images are not announced by the screenreader.
</details>
<details>
<summary>Navigate using the keyboard only (expand)</summary>
- Ensure all links (navigation, text and/or image), form controls and page functions can be reached with the tab key in a logical order.
- Ensure all links (navigation, text and/or image), form controls and page functions can be triggered with the spacebar, enter key, or arrow keys.
- Ensure all interactive elements can be reached with the tab key in a logical order
- Ensure all interactive elements can be triggered with the spacebar, enter key, or arrow keys.
- Ensure focus is always visible and appears in logical order.
- Ensure each interactive element has visible focus state which appears in logical order.
</details>
---
## How to configure this issue
- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
| non_process | clp back to top alignment issue description based on recent design review post staging review it was identified that the back to top component need to be align the button the right most column example below also discussed rounding out the right side but will need rtwell review and approval in support example alignment for clp back to top acceptance criteria designer comments on this issue to approve page styling back to top is aligned per guidance validated and tested back to top feature is visual and accessible per desired approach end to end tests show violations appendix ensure your code changes are covered by tests expand add tests if none exist for this addition change update existing tests if this code will change functionality include axe checks including hidden content run axe checks using the chrome or firefox browser plugin expand ensure no heading levels are skipped ensure all buttons and labeled inputs use semantic html elements ensure all buttons labeled elements and images are identified using html semantic markers or aria roles ensure form fields have clearly defined boundaries or outlines ensure form fields do not use placeholder text ensure form fields have highly visible and specific error states test for color contrast and color blindness issues expand all text has appropriate contrast zoom layouts to at width expand ensure readability and usability are supported when zoomed up to at browser width ensure no content gets focused offscreen or is hidden from view test with or screen readers expand ensure the page includes a skip navigation link ensure all links are properly descriptive ensure screen reader users can hear the text equivalent for each image conveying information ensure screen reader users can hear the text equivalent for each image button ensure screen reader users can hear labels and instructions for inputs ensure purely decorative images are not announced by the screenreader navigate using the keyboard only expand ensure all links navigation text and or image form controls and page functions can be reached with the tab key in a logical order ensure all links navigation text and or image form controls and page functions can be triggered with the spacebar enter key or arrow keys ensure all interactive elements can be reached with the tab key in a logical order ensure all interactive elements can be triggered with the spacebar enter key or arrow keys ensure focus is always visible and appears in logical order ensure each interactive element has visible focus state which appears in logical order how to configure this issue attached to a milestone when will this be completed attached to an epic what body of work is this a part of labeled with team product support analytics insights operations service design tools be tools fe labeled with practice area backend frontend devops design research product ia qa analytics contact center research accessibility content labeled with type bug request discovery documentation etc | 0 |
19,753 | 26,114,032,598 | IssuesEvent | 2022-12-28 02:04:06 | openvinotoolkit/openvino | https://api.github.com/repos/openvinotoolkit/openvino | closed | How to reshape and transpose tensor in openvino? | category: preprocessing support_request | Hi~
I have a request to handle data in openvino, such as reshape and transpose tensor, and the information about it is limited. Anyone can provide some useful tips, please?
I try the set_shape method, but it can't work.
| 1.0 | How to reshape and transpose tensor in openvino? - Hi~
I have a request to handle data in openvino, such as reshape and transpose tensor, and the information about it is limited. Anyone can provide some useful tips, please?
I try the set_shape method, but it can't work.
| process | how to reshape and transpose tensor in openvino hi i have a request to handle data in openvino such as reshape and transpose tensor and the information about it is limited anyone can provide some useful tips please i try the set shape method but it can t work | 1 |
10,048 | 3,351,690,905 | IssuesEvent | 2015-11-17 19:36:19 | RasppleII/a2server | https://api.github.com/repos/RasppleII/a2server | closed | Info: Apple II archive formats | documentation | Binary II wrapper for a file contains ProDOS metadata about file, e.g. file type/auxtype. Binary II is not a compression format, though it can wrap compressed files, including the "squeezed" format supported by BLU (Binary Library Utility) which makes Binary II files.
NuFX is a multifile (archive) format that can contain metadata, compressed files, and disk images.
BinScii is a 6-bit encoding format like Base64.
.BNY Binary II wrapped file(s)
.QQ Squeezed file
.BQY Binary II wrapped squeezed file(s) (like .QQ.BNY)
.BNX same as BQY?
.SHK NuFX archive
.SDK NuFX archive containing a single disk image
.BXY Binary II wrapped NuFX archive (like .SHK.BNY or .SDK.BNY)
.SEA GS/OS self-extracting NuFX archive
.BSE Binary II wrapped GS/OS self-extracting archive (like .SEA.BNY)
.BSC BinScii file [use BinScii or GScii]
.BSQ BinScii encoded NuFX archive (like .SHK.BSC or .SDK.BSC) [use BinScii, then ShrinkIt on the output]
A Binary II file can contain multiple files; it just concatenates each individual BNY wrapped file within to create an archive. If it's a squeezed file, it ends in .QQ within the Binary II wrapper.
BLU will not make standalone squeezed files, it will only squeeze them when making a Binary II file. You can choose files to be either squeezed or not when making the Binary II file.
ShrinkIt can process all of the above formats except BinSCII.
GS-Shrinkit (aka GSHK) cannot work with 5.25" disk images in NuFX archives.
Nulib2 will also process all of the above formats except BinScii, but it won't unsqueeze a QQ unless it's already inside a Binary II file. 'usq' can unsqueeze a standalone QQ file. 'sciibin' can unencode a BinScii file.
ref:
http://mirrors.apple2.org.za/ftp.gno.org/unix.tools/
http://www.chebucto.ns.ca/Services/PDA/AppleIICompression.shtml
| 1.0 | Info: Apple II archive formats - Binary II wrapper for a file contains ProDOS metadata about file, e.g. file type/auxtype. Binary II is not a compression format, though it can wrap compressed files, including the "squeezed" format supported by BLU (Binary Library Utility) which makes Binary II files.
NuFX is a multifile (archive) format that can contain metadata, compressed files, and disk images.
BinScii is a 6-bit encoding format like Base64.
.BNY Binary II wrapped file(s)
.QQ Squeezed file
.BQY Binary II wrapped squeezed file(s) (like .QQ.BNY)
.BNX same as BQY?
.SHK NuFX archive
.SDK NuFX archive containing a single disk image
.BXY Binary II wrapped NuFX archive (like .SHK.BNY or .SDK.BNY)
.SEA GS/OS self-extracting NuFX archive
.BSE Binary II wrapped GS/OS self-extracting archive (like .SEA.BNY)
.BSC BinScii file [use BinScii or GScii]
.BSQ BinScii encoded NuFX archive (like .SHK.BSC or .SDK.BSC) [use BinScii, then ShrinkIt on the output]
A Binary II file can contain multiple files; it just concatenates each individual BNY wrapped file within to create an archive. If it's a squeezed file, it ends in .QQ within the Binary II wrapper.
BLU will not make standalone squeezed files, it will only squeeze them when making a Binary II file. You can choose files to be either squeezed or not when making the Binary II file.
ShrinkIt can process all of the above formats except BinSCII.
GS-Shrinkit (aka GSHK) cannot work with 5.25" disk images in NuFX archives.
Nulib2 will also process all of the above formats except BinScii, but it won't unsqueeze a QQ unless it's already inside a Binary II file. 'usq' can unsqueeze a standalone QQ file. 'sciibin' can unencode a BinScii file.
ref:
http://mirrors.apple2.org.za/ftp.gno.org/unix.tools/
http://www.chebucto.ns.ca/Services/PDA/AppleIICompression.shtml
| non_process | info apple ii archive formats binary ii wrapper for a file contains prodos metadata about file e g file type auxtype binary ii is not a compression format though it can wrap compressed files including the squeezed format supported by blu binary library utility which makes binary ii files nufx is a multifile archive format that can contain metadata compressed files and disk images binscii is a bit encoding format like bny binary ii wrapped file s qq squeezed file bqy binary ii wrapped squeezed file s like qq bny bnx same as bqy shk nufx archive sdk nufx archive containing a single disk image bxy binary ii wrapped nufx archive like shk bny or sdk bny sea gs os self extracting nufx archive bse binary ii wrapped gs os self extracting archive like sea bny bsc binscii file bsq binscii encoded nufx archive like shk bsc or sdk bsc a binary ii file can contain multiple files it just concatenates each individual bny wrapped file within to create an archive if it s a squeezed file it ends in qq within the binary ii wrapper blu will not make standalone squeezed files it will only squeeze them when making a binary ii file you can choose files to be either squeezed or not when making the binary ii file shrinkit can process all of the above formats except binscii gs shrinkit aka gshk cannot work with disk images in nufx archives will also process all of the above formats except binscii but it won t unsqueeze a qq unless it s already inside a binary ii file usq can unsqueeze a standalone qq file sciibin can unencode a binscii file ref | 0 |
16,550 | 21,568,599,121 | IssuesEvent | 2022-05-02 04:17:57 | lynnandtonic/nestflix.fun | https://api.github.com/repos/lynnandtonic/nestflix.fun | closed | Irma Vep from Irma Vep | suggested title in process | Please add as much of the following info as you can:
Title: Irma Vep
Type (film/tv show): Film
Film or show in which it appears: Irma Vep (1996) (apparently being remade as a limited series)
Is the parent film/show streaming anywhere? Not sure (prev. Criterion Channel)
About when in the parent film/show does it appear? Throughout, but final product at the end
Actual footage of the film/show can be seen (yes/no)? Yes
| 1.0 | Irma Vep from Irma Vep - Please add as much of the following info as you can:
Title: Irma Vep
Type (film/tv show): Film
Film or show in which it appears: Irma Vep (1996) (apparently being remade as a limited series)
Is the parent film/show streaming anywhere? Not sure (prev. Criterion Channel)
About when in the parent film/show does it appear? Throughout, but final product at the end
Actual footage of the film/show can be seen (yes/no)? Yes
| process | irma vep from irma vep please add as much of the following info as you can title irma vep type film tv show film film or show in which it appears irma vep apparently being remade as a limited series is the parent film show streaming anywhere not sure prev criterion channel about when in the parent film show does it appear throughout but final product at the end actual footage of the film show can be seen yes no yes | 1 |
319,681 | 27,393,082,462 | IssuesEvent | 2023-02-28 17:33:09 | akkadotnet/akka.net | https://api.github.com/repos/akkadotnet/akka.net | closed | Akka.TestKit: accidental breaking changes in v1.5.0-beta3 | confirmed bug akka-testkit | **Version Information**
Version of Akka.NET? v1.5.0-beta3
Which Akka.NET Modules? All TestKit distributions.
**Describe the bug**
The following API signatures were broken by accident - and we should add them back before v1.5 ships:
- [x] `ExpectMsgAllOf` used to accept `params object[]` - no longer does.
- [x] `AwaitConditionAsync` used to accept a `Func<bool>` - now it only accepts a `Func<Task<bool>>`; should still accept both.
- [x] `ExpectOneAsync` used to accept an `Action` - now it only accepts `Func<Task>` | 1.0 | Akka.TestKit: accidental breaking changes in v1.5.0-beta3 - **Version Information**
Version of Akka.NET? v1.5.0-beta3
Which Akka.NET Modules? All TestKit distributions.
**Describe the bug**
The following API signatures were broken by accident - and we should add them back before v1.5 ships:
- [x] `ExpectMsgAllOf` used to accept `params object[]` - no longer does.
- [x] `AwaitConditionAsync` used to accept a `Func<bool>` - now it only accepts a `Func<Task<bool>>`; should still accept both.
- [x] `ExpectOneAsync` used to accept an `Action` - now it only accepts `Func<Task>` | non_process | akka testkit accidental breaking changes in version information version of akka net which akka net modules all testkit distributions describe the bug the following api signatures were broken by accident and we should add them back before ships expectmsgallof used to accept params object no longer does awaitconditionasync used to accept a func now it only accepts a func should still accept both expectoneasync used to accept an action now it only accepts func | 0 |
6,975 | 10,122,201,198 | IssuesEvent | 2019-07-31 17:22:32 | material-components/material-components-ios | https://api.github.com/repos/material-components/material-components-ios | closed | [ActionSheet] Internal issue: b/138203076 | [ActionSheet] type:Process | This was filed as an internal issue. If you are a Googler, please visit [b/138203076](http://b/138203076) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/138203076](http://b/138203076)
- Blocked by: https://github.com/material-components/material-components-ios/issues/8025 | 1.0 | [ActionSheet] Internal issue: b/138203076 - This was filed as an internal issue. If you are a Googler, please visit [b/138203076](http://b/138203076) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/138203076](http://b/138203076)
- Blocked by: https://github.com/material-components/material-components-ios/issues/8025 | process | internal issue b this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug blocked by | 1 |
29,522 | 13,131,444,137 | IssuesEvent | 2020-08-06 17:01:42 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | Unable to create S3 buckets on local mocks since 3.0.0 | bug service/s3 | <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_s3_bucket
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
provider "aws" {
region = "eu-west-1"
access_key = "fakeKey"
secret_key = "fakeKey"
skip_credentials_validation = true
skip_metadata_api_check = true
s3_force_path_style = true
skip_requesting_account_id = true
endpoints {
s3 = "http://localhost:9090"
}
}
resource "aws_s3_bucket" "test" {
bucket = "test-bucket"
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
Bucket should have been created on s3 mock
### Actual Behavior
`Error: error getting S3 Bucket location: RequestError: send request failed
caused by: dial tcp: lookup test-bucket.localhost on 192.168.1.1:53: no such host`
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `docker run -p 9090:9090 -t adobe/s3mock`
2. `terraform apply`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? --->
- Started failing in 3.0 after new GetBucketLocation code was merged (https://github.com/terraform-providers/terraform-provider-aws/issues/14217)
- I would assume it has to do with `aws.Bool(false)` being set to false [here](https://github.com/terraform-providers/terraform-provider-aws/pull/14221/files#diff-39447b7f5c6fe8241bbfd2e9e8e7cb8cR98) rather than respecting `s3_force_path_style = true` in the provider
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* #14217
| 1.0 | Unable to create S3 buckets on local mocks since 3.0.0 - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_s3_bucket
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
provider "aws" {
region = "eu-west-1"
access_key = "fakeKey"
secret_key = "fakeKey"
skip_credentials_validation = true
skip_metadata_api_check = true
s3_force_path_style = true
skip_requesting_account_id = true
endpoints {
s3 = "http://localhost:9090"
}
}
resource "aws_s3_bucket" "test" {
bucket = "test-bucket"
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
Bucket should have been created on s3 mock
### Actual Behavior
`Error: error getting S3 Bucket location: RequestError: send request failed
caused by: dial tcp: lookup test-bucket.localhost on 192.168.1.1:53: no such host`
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `docker run -p 9090:9090 -t adobe/s3mock`
2. `terraform apply`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? --->
- Started failing in 3.0 after new GetBucketLocation code was merged (https://github.com/terraform-providers/terraform-provider-aws/issues/14217)
- I would assume it has to do with `aws.Bool(false)` being set to false [here](https://github.com/terraform-providers/terraform-provider-aws/pull/14221/files#diff-39447b7f5c6fe8241bbfd2e9e8e7cb8cR98) rather than respecting `s3_force_path_style = true` in the provider
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* #14217
| non_process | unable to create buckets on local mocks since please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform cli and terraform aws provider version affected resource s aws bucket terraform configuration files hcl provider aws region eu west access key fakekey secret key fakekey skip credentials validation true skip metadata api check true force path style true skip requesting account id true endpoints resource aws bucket test bucket test bucket debug output please provide a link to a github gist containing the complete debug output please do not paste the debug output in the issue just paste a link to the gist to obtain the debug output see the panic output expected behavior bucket should have been created on mock actual behavior error error getting bucket location requesterror send request failed caused by dial tcp lookup test bucket localhost on no such host steps to reproduce docker run p t adobe terraform apply important factoids started failing in after new getbucketlocation code was merged i would assume it has to do with aws bool false being set to false rather than respecting force path style true in the provider references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor documentation for example | 0 |
196,028 | 15,571,920,086 | IssuesEvent | 2021-03-17 06:02:48 | Anthrasite/BotPumpkin | https://api.github.com/repos/Anthrasite/BotPumpkin | opened | Add detailed information on how to setup the bot | documentation | Some detailed information should be written up on how to setup this bot, including AWS and Discord configuration. This will help others who may discover and try to use this bot, as well as to keep a record of how to set up everything in case it ever needs to be done again. | 1.0 | Add detailed information on how to setup the bot - Some detailed information should be written up on how to setup this bot, including AWS and Discord configuration. This will help others who may discover and try to use this bot, as well as to keep a record of how to set up everything in case it ever needs to be done again. | non_process | add detailed information on how to setup the bot some detailed information should be written up on how to setup this bot including aws and discord configuration this will help others who may discover and try to use this bot as well as to keep a record of how to set up everything in case it ever needs to be done again | 0 |
17,489 | 23,302,726,543 | IssuesEvent | 2022-08-07 15:11:15 | Battle-s/battle-school-backend | https://api.github.com/repos/Battle-s/battle-school-backend | closed | [FEAT] 예외처리 advice 세팅 | feature :computer: processing :hourglass_flowing_sand: | ## 설명
> 이슈에 대한 설명을 작성합니다. 담당자도 함께 작성하면 좋습니다.
## 체크사항
> 이슈를 close하기 위해 필요한 조건들을 체크박스로 나열합니다.
- [ ] 예외처리 advice 개념 복습 후 코드 작성
- [ ] 당장 필요한 exception 생성
- [ ] 만들어진 코드들에 대해 exception 적용
## 참고자료
> 이슈를 해결하기 위해 필요한 참고자료가 있다면 추가합니다.
## 관련 논의
> 이슈에 대한 논의가 있었다면 논의 내용을 간략하게 추가합니다.
| 1.0 | [FEAT] 예외처리 advice 세팅 - ## 설명
> 이슈에 대한 설명을 작성합니다. 담당자도 함께 작성하면 좋습니다.
## 체크사항
> 이슈를 close하기 위해 필요한 조건들을 체크박스로 나열합니다.
- [ ] 예외처리 advice 개념 복습 후 코드 작성
- [ ] 당장 필요한 exception 생성
- [ ] 만들어진 코드들에 대해 exception 적용
## 참고자료
> 이슈를 해결하기 위해 필요한 참고자료가 있다면 추가합니다.
## 관련 논의
> 이슈에 대한 논의가 있었다면 논의 내용을 간략하게 추가합니다.
| process | 예외처리 advice 세팅 설명 이슈에 대한 설명을 작성합니다 담당자도 함께 작성하면 좋습니다 체크사항 이슈를 close하기 위해 필요한 조건들을 체크박스로 나열합니다 예외처리 advice 개념 복습 후 코드 작성 당장 필요한 exception 생성 만들어진 코드들에 대해 exception 적용 참고자료 이슈를 해결하기 위해 필요한 참고자료가 있다면 추가합니다 관련 논의 이슈에 대한 논의가 있었다면 논의 내용을 간략하게 추가합니다 | 1 |
234,159 | 17,935,493,565 | IssuesEvent | 2021-09-10 14:50:07 | AzureAD/microsoft-authentication-library-for-js | https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-js | opened | acquireTokenSilent refreshes page when React.Suspense is wrapped inside MsalProvider from @azure/msal-react | question documentation | ### Core Library
MSAL.js v2 (@azure/msal-browser)
### Wrapper Library
MSAL React (@azure/msal-react)
### Documentation Location
https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/getting-started.md
### Description
Libraries
`@azure/msal-browser versions tested 2.17.0, 2.11.2` (probbaly also true for `2.16`)
`@azure/msal-react versions tested 11.0.2, 1.0.1, 1.0.0`
In my application where I had just integrated the `@azure/msal-browse` and `@azure/msal-react` libraries, my calls to `acquireTokenSilent`, `acquireTokenPopup` and `acquireTokenRedirect` were causing my current app page to reload. I found issue [#526](https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/526) which was originally concerned with Vue and then Angular but not React.
So, after some hit and trial, I discovered the following problem:
Wrapping `React.Suspense` inside the `MsalProvider` was causing the current page to reload on calls to `acquireTokenSilent`, `acquireTokenPopup` and `acquireTokenRedirect`. Swapping the wrapping order to make `React.Suspense` the parent of `MsalProvider` fixes the issue.
So, this is my `App.js`' relevant code which causes the problem of reloading on token calls
```
<MsalProvider instance={msalInstance}>
<React.Suspense fallback={<Loader />}>
// Routes etc
</React.Suspense>
</MsalProvider>
```
When I switched the wrapping order like this, the issue was resolved
```
<React.Suspense fallback={<Loader />}>
<MsalProvider instance={msalInstance}>
// Routes etc
</MsalProvider>
</React.Suspense>
```
So here is my suggestion, include this info on the Getting started document where the wrapping part is shown.
PS: I have tried to understand the reason for why it is happening and based on the linked issue, it is possibly related to iframes and how React.Suspense works but I don't really understand it.
| 1.0 | acquireTokenSilent refreshes page when React.Suspense is wrapped inside MsalProvider from @azure/msal-react - ### Core Library
MSAL.js v2 (@azure/msal-browser)
### Wrapper Library
MSAL React (@azure/msal-react)
### Documentation Location
https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/getting-started.md
### Description
Libraries
`@azure/msal-browser versions tested 2.17.0, 2.11.2` (probbaly also true for `2.16`)
`@azure/msal-react versions tested 11.0.2, 1.0.1, 1.0.0`
In my application where I had just integrated the `@azure/msal-browse` and `@azure/msal-react` libraries, my calls to `acquireTokenSilent`, `acquireTokenPopup` and `acquireTokenRedirect` were causing my current app page to reload. I found issue [#526](https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/526) which was originally concerned with Vue and then Angular but not React.
So, after some hit and trial, I discovered the following problem:
Wrapping `React.Suspense` inside the `MsalProvider` was causing the current page to reload on calls to `acquireTokenSilent`, `acquireTokenPopup` and `acquireTokenRedirect`. Swapping the wrapping order to make `React.Suspense` the parent of `MsalProvider` fixes the issue.
So, this is my `App.js`' relevant code which causes the problem of reloading on token calls
```
<MsalProvider instance={msalInstance}>
<React.Suspense fallback={<Loader />}>
// Routes etc
</React.Suspense>
</MsalProvider>
```
When I switched the wrapping order like this, the issue was resolved
```
<React.Suspense fallback={<Loader />}>
<MsalProvider instance={msalInstance}>
// Routes etc
</MsalProvider>
</React.Suspense>
```
So here is my suggestion, include this info on the Getting started document where the wrapping part is shown.
PS: I have tried to understand the reason for why it is happening and based on the linked issue, it is possibly related to iframes and how React.Suspense works but I don't really understand it.
| non_process | acquiretokensilent refreshes page when react suspense is wrapped inside msalprovider from azure msal react core library msal js azure msal browser wrapper library msal react azure msal react documentation location description libraries azure msal browser versions tested probbaly also true for azure msal react versions tested in my application where i had just integrated the azure msal browse and azure msal react libraries my calls to acquiretokensilent acquiretokenpopup and acquiretokenredirect were causing my current app page to reload i found issue which was originally concerned with vue and then angular but not react so after some hit and trial i discovered the following problem wrapping react suspense inside the msalprovider was causing the current page to reload on calls to acquiretokensilent acquiretokenpopup and acquiretokenredirect swapping the wrapping order to make react suspense the parent of msalprovider fixes the issue so this is my app js relevant code which causes the problem of reloading on token calls routes etc when i switched the wrapping order like this the issue was resolved routes etc so here is my suggestion include this info on the getting started document where the wrapping part is shown ps i have tried to understand the reason for why it is happening and based on the linked issue it is possibly related to iframes and how react suspense works but i don t really understand it | 0 |
25,103 | 5,132,711,892 | IssuesEvent | 2017-01-11 00:03:57 | OfficeDev/office-ui-fabric-core | https://api.github.com/repos/OfficeDev/office-ui-fabric-core | closed | Document how to override the base font-family to use another font | documentation | Separating this work item out from #852. There are limitations on the use of Segoe UI, which prevent developers from using Fabric for projects that aren't specifically for Office, or from hosting their own fonts without relying on the CDN.
The request here is to document how to override the base `font-family` to set a different font stack, which would prevent Fabric from pulling Segoe UI from the CDN as usual. | 1.0 | Document how to override the base font-family to use another font - Separating this work item out from #852. There are limitations on the use of Segoe UI, which prevent developers from using Fabric for projects that aren't specifically for Office, or from hosting their own fonts without relying on the CDN.
The request here is to document how to override the base `font-family` to set a different font stack, which would prevent Fabric from pulling Segoe UI from the CDN as usual. | non_process | document how to override the base font family to use another font separating this work item out from there are limitations on the use of segoe ui which prevent developers from using fabric for projects that aren t specifically for office or from hosting their own fonts without relying on the cdn the request here is to document how to override the base font family to set a different font stack which would prevent fabric from pulling segoe ui from the cdn as usual | 0 |
170,436 | 13,187,086,977 | IssuesEvent | 2020-08-13 02:18:28 | KMcNickel/HH-DMX-Console-Hardware | https://api.github.com/repos/KMcNickel/HH-DMX-Console-Hardware | closed | PWRON pull-up needs to be removed/modified | bodge tested | Removing the pull up prevents the capacitance from re-enabling the LTC3101 after the MCU shuts off.
If a shorter button press is desired to turn the unit on, the PWRON pin can be shorted to the RESET pin and a pull-up can be added. HOWEVER, an investigation must be done to determine if there will be an issue with the SWD reset connection.
One thought is this: If the PWRON is entirely dependent on the MCU, the power button must be held down during programming (SWD or USB). If the PWRON is tied to reset, the button will only need to be held during SWD programming, the unit should stay on by itself during USB programming. | 1.0 | PWRON pull-up needs to be removed/modified - Removing the pull up prevents the capacitance from re-enabling the LTC3101 after the MCU shuts off.
If a shorter button press is desired to turn the unit on, the PWRON pin can be shorted to the RESET pin and a pull-up can be added. HOWEVER, an investigation must be done to determine if there will be an issue with the SWD reset connection.
One thought is this: If the PWRON is entirely dependent on the MCU, the power button must be held down during programming (SWD or USB). If the PWRON is tied to reset, the button will only need to be held during SWD programming, the unit should stay on by itself during USB programming. | non_process | pwron pull up needs to be removed modified removing the pull up prevents the capacitance from re enabling the after the mcu shuts off if a shorter button press is desired to turn the unit on the pwron pin can be shorted to the reset pin and a pull up can be added however an investigation must be done to determine if there will be an issue with the swd reset connection one thought is this if the pwron is entirely dependent on the mcu the power button must be held down during programming swd or usb if the pwron is tied to reset the button will only need to be held during swd programming the unit should stay on by itself during usb programming | 0 |
13,713 | 16,471,603,041 | IssuesEvent | 2021-05-23 14:26:22 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Crash deserializing IPC message using advanced serialization | child_process confirmed-bug v8 module | * **Version**: 14.8.0
* **Platform**: MacOS
* **Subsystem**: child_process
### What steps will reproduce the bug?
I start a child process using `fork()` and `{ serialization: 'advanced' }`. In the child process I synchronously call `process.send()` until it returns `false`. I keep `process.channel` referenced until all my `send()` callbacks have been called. Every so often, the main process crashes. Other times it exits gracefully, though it does not log all received messages. Presumably because the child process exits without flushing its IPC buffer. That's manageable and probably not a bug.
`main.js`:
```js
const childProcess = require('child_process')
const channel = childProcess.fork('./child.js', [], { serialization: 'advanced' })
channel.on('message', message => {
console.error('main:', message.count)
})
```
`child.js`:
```js
// Keep the process alive until all messages have been sent.
process.channel.ref()
let pending = 0
const drain = () => {
if (--pending === 0) {
console.error('all messages sent')
if (!process.connected) {
console.error('already disconnected')
} else {
console.error('unref channel')
// Allow the process to exit.
process.channel.unref()
}
}
}
// Fill up any internal buffers.
const filler = Buffer.alloc(2 ** 12, 1)
// Enqueue as many messages as possible until we're told to back off.
let count = 0
let ok
do {
pending++
ok = process.send({count: ++count, filler}, drain)
console.error('child:', count, ok)
} while (ok)
```
And then run `node main.js`.
### How often does it reproduce? Is there a required condition?
It's intermittent.
### What is the expected behavior?
The main process does not crash.
### What do you see instead?
The main process crashes with the following error:
```
internal/child_process/serialization.js:69
deserializer.readHeader();
^
Error: Unable to deserialize cloned data due to invalid or unsupported version.
at parseChannelMessages (internal/child_process/serialization.js:69:20)
at parseChannelMessages.next (<anonymous>)
at Pipe.channel.onread (internal/child_process.js:595:18)
```
### Additional information
If I modify `child.process` to schedule sends using `setImmediate()` there is no crash, and the main process receives all messages:
```js
let count = 0
do {
pending++
setImmediate(() => {
process.send({count: ++count, filler}, drain)
console.error('child:', count)
})
} while (pending < 100)
``` | 1.0 | Crash deserializing IPC message using advanced serialization - * **Version**: 14.8.0
* **Platform**: MacOS
* **Subsystem**: child_process
### What steps will reproduce the bug?
I start a child process using `fork()` and `{ serialization: 'advanced' }`. In the child process I synchronously call `process.send()` until it returns `false`. I keep `process.channel` referenced until all my `send()` callbacks have been called. Every so often, the main process crashes. Other times it exits gracefully, though it does not log all received messages. Presumably because the child process exits without flushing its IPC buffer. That's manageable and probably not a bug.
`main.js`:
```js
const childProcess = require('child_process')
const channel = childProcess.fork('./child.js', [], { serialization: 'advanced' })
channel.on('message', message => {
console.error('main:', message.count)
})
```
`child.js`:
```js
// Keep the process alive until all messages have been sent.
process.channel.ref()
let pending = 0
const drain = () => {
if (--pending === 0) {
console.error('all messages sent')
if (!process.connected) {
console.error('already disconnected')
} else {
console.error('unref channel')
// Allow the process to exit.
process.channel.unref()
}
}
}
// Fill up any internal buffers.
const filler = Buffer.alloc(2 ** 12, 1)
// Enqueue as many messages as possible until we're told to back off.
let count = 0
let ok
do {
pending++
ok = process.send({count: ++count, filler}, drain)
console.error('child:', count, ok)
} while (ok)
```
And then run `node main.js`.
### How often does it reproduce? Is there a required condition?
It's intermittent.
### What is the expected behavior?
The main process does not crash.
### What do you see instead?
The main process crashes with the following error:
```
internal/child_process/serialization.js:69
deserializer.readHeader();
^
Error: Unable to deserialize cloned data due to invalid or unsupported version.
at parseChannelMessages (internal/child_process/serialization.js:69:20)
at parseChannelMessages.next (<anonymous>)
at Pipe.channel.onread (internal/child_process.js:595:18)
```
### Additional information
If I modify `child.process` to schedule sends using `setImmediate()` there is no crash, and the main process receives all messages:
```js
let count = 0
do {
pending++
setImmediate(() => {
process.send({count: ++count, filler}, drain)
console.error('child:', count)
})
} while (pending < 100)
``` | process | crash deserializing ipc message using advanced serialization version platform macos subsystem child process what steps will reproduce the bug i start a child process using fork and serialization advanced in the child process i synchronously call process send until it returns false i keep process channel referenced until all my send callbacks have been called every so often the main process crashes other times it exits gracefully though it does not log all received messages presumably because the child process exits without flushing its ipc buffer that s manageable and probably not a bug main js js const childprocess require child process const channel childprocess fork child js serialization advanced channel on message message console error main message count child js js keep the process alive until all messages have been sent process channel ref let pending const drain if pending console error all messages sent if process connected console error already disconnected else console error unref channel allow the process to exit process channel unref fill up any internal buffers const filler buffer alloc enqueue as many messages as possible until we re told to back off let count let ok do pending ok process send count count filler drain console error child count ok while ok and then run node main js how often does it reproduce is there a required condition it s intermittent what is the expected behavior the main process does not crash what do you see instead the main process crashes with the following error internal child process serialization js deserializer readheader error unable to deserialize cloned data due to invalid or unsupported version at parsechannelmessages internal child process serialization js at parsechannelmessages next at pipe channel onread internal child process js additional information if i modify child process to schedule sends using setimmediate there is no crash and the main process receives all messages js let count do pending setimmediate process send count count filler drain console error child count while pending | 1 |
18,417 | 24,544,763,226 | IssuesEvent | 2022-10-12 07:56:07 | home-climate-control/dz | https://api.github.com/repos/home-climate-control/dz | opened | Economizer: implement Swing visualization combined with Zone | enhancement usability visualization UX process control reactive-only | Reference: [HVAC Device: Economizer](https://github.com/home-climate-control/dz/wiki/HVAC-Device:-Economizer)
Depends on: #239
### Acceptance Criteria
* Zone panel/cell pair displays zone status against the following variables in a way consistent with the display in the case when Economizer is not configured:
* Indoor temperature
* Setpoint
* Changeover delta
* Target temperature
* Ambient temperature
* Zone panel allows to control changeover delta, target temperature, keep HVAC running on/off, and economizer enabled/disabled state along with currently existing zone controls | 1.0 | Economizer: implement Swing visualization combined with Zone - Reference: [HVAC Device: Economizer](https://github.com/home-climate-control/dz/wiki/HVAC-Device:-Economizer)
Depends on: #239
### Acceptance Criteria
* Zone panel/cell pair displays zone status against the following variables in a way consistent with the display in the case when Economizer is not configured:
* Indoor temperature
* Setpoint
* Changeover delta
* Target temperature
* Ambient temperature
* Zone panel allows to control changeover delta, target temperature, keep HVAC running on/off, and economizer enabled/disabled state along with currently existing zone controls | process | economizer implement swing visualization combined with zone reference depends on acceptance criteria zone panel cell pair displays zone status against the following variables in a way consistent with the display in the case when economizer is not configured indoor temperature setpoint changeover delta target temperature ambient temperature zone panel allows to control changeover delta target temperature keep hvac running on off and economizer enabled disabled state along with currently existing zone controls | 1 |
12,129 | 14,740,890,047 | IssuesEvent | 2021-01-07 09:46:58 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Missing credit card info | anc-process anp-important ant-bug has attachment | In GitLab by @kdjstudios on Dec 12, 2018, 13:34
**Submitted by:** "Arianna Screen" <arianna.screen@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-12-12-81793/conversation
**Server:** Internal (All)
**Client/Site:** Santa Rosa (All)
**Account:** NA
**Issue:**
As I'm doing the credit card charges for this month I am using the saved credit card in SAB.
I noticed that when I put in the credit card info myself I can see the type of credit card used. When I use the saved credit card option the type of card is not recorded as you can see in the attachment. Also, if there is no credit card type, it doesn’t show up on the Credit Card Reporting web page.
 | 1.0 | Missing credit card info - In GitLab by @kdjstudios on Dec 12, 2018, 13:34
**Submitted by:** "Arianna Screen" <arianna.screen@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-12-12-81793/conversation
**Server:** Internal (All)
**Client/Site:** Santa Rosa (All)
**Account:** NA
**Issue:**
As I'm doing the credit card charges for this month I am using the saved credit card in SAB.
I noticed that when I put in the credit card info myself I can see the type of credit card used. When I use the saved credit card option the type of card is not recorded as you can see in the attachment. Also, if there is no credit card type, it doesn’t show up on the Credit Card Reporting web page.
 | process | missing credit card info in gitlab by kdjstudios on dec submitted by arianna screen helpdesk server internal all client site santa rosa all account na issue as i m doing the credit card charges for this month i am using the saved credit card in sab i noticed that when i put in the credit card info myself i can see the type of credit card used when i use the saved credit card option the type of card is not recorded as you can see in the attachment also if there is no credit card type it doesn’t show up on the credit card reporting web page uploads cc no info png | 1 |
152,081 | 13,444,146,395 | IssuesEvent | 2020-09-08 09:23:08 | aarhusstadsarkiv/digiarch | https://api.github.com/repos/aarhusstadsarkiv/digiarch | closed | Fix data.py documentation! | documentation | <!--
Hi! :)
Documentation issues encompass improvements and/or additions to our current documentation. If there is an issue, please provide a direct link.
The markdown syntax for adding links to text is `[text](url)`
-->
Several functions in [`data.py`](https://github.com/aarhusstadsarkiv/digital-archive/blob/master/digital_archive/data.py) are not properly documented - some lack docstrings entirely. | 1.0 | Fix data.py documentation! - <!--
Hi! :)
Documentation issues encompass improvements and/or additions to our current documentation. If there is an issue, please provide a direct link.
The markdown syntax for adding links to text is `[text](url)`
-->
Several functions in [`data.py`](https://github.com/aarhusstadsarkiv/digital-archive/blob/master/digital_archive/data.py) are not properly documented - some lack docstrings entirely. | non_process | fix data py documentation hi documentation issues encompass improvements and or additions to our current documentation if there is an issue please provide a direct link the markdown syntax for adding links to text is url several functions in are not properly documented some lack docstrings entirely | 0 |
5,048 | 7,859,833,909 | IssuesEvent | 2018-06-21 17:53:35 | leg2015/Aagos | https://api.github.com/repos/leg2015/Aagos | reopened | Create Python Script to Clean Data | data processing | - [ ] Python script to clean data
* should loop through each directory for each mutation combo and add to one large csv | 1.0 | Create Python Script to Clean Data - - [ ] Python script to clean data
* should loop through each directory for each mutation combo and add to one large csv | process | create python script to clean data python script to clean data should loop through each directory for each mutation combo and add to one large csv | 1 |
14,043 | 16,849,587,421 | IssuesEvent | 2021-06-20 08:14:12 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | Update rasterize.py (Request in QGIS) | 3.20 Processing Alg | ### Request for documentation
From pull request QGIS/qgis#41905
Author: @talledodiego
QGIS version: 3.20
**Update rasterize.py**
### PR Description:
## Description
Included the possibility to use Z of feature to extract burn values (fixes issue reported in #41896)
## Detailed description
Referring to latest documentation available [here](https://docs.qgis.org/3.16/en/docs/user_manual/processing_algs/gdal/vectorconversion.html#rasterize-vector-to-raster), this PR includes a new parameter to the plugin.
The PR affects [rasterize (vector to layer)](https://github.com/qgis/QGIS-Documentation/blob/master/docs/user_manual/processing_algs/gdal/vectorconversion.rst#parameters-3) with the user having the possibility to specify of using Z value of features to extract burn values:
The description of the parameter (to be inserted after the parameter [BURN]) is:
**Label**: Burn value extracted from the "Z" values of the feature [Optional]
**Name**: USE_Z
**Type**: [boolean] Default False
**Description**: Indicates that a burn value should be extracted from the “Z” values of the feature. Works with points and lines (linear interpolation along each segment). For polygons, works properly only if the are flat (same Z value for all vertices)
### Commits tagged with [need-docs] or [FEATURE] | 1.0 | Update rasterize.py (Request in QGIS) - ### Request for documentation
From pull request QGIS/qgis#41905
Author: @talledodiego
QGIS version: 3.20
**Update rasterize.py**
### PR Description:
## Description
Included the possibility to use Z of feature to extract burn values (fixes issue reported in #41896)
## Detailed description
Referring to latest documentation available [here](https://docs.qgis.org/3.16/en/docs/user_manual/processing_algs/gdal/vectorconversion.html#rasterize-vector-to-raster), this PR includes a new parameter to the plugin.
The PR affects [rasterize (vector to layer)](https://github.com/qgis/QGIS-Documentation/blob/master/docs/user_manual/processing_algs/gdal/vectorconversion.rst#parameters-3) with the user having the possibility to specify of using Z value of features to extract burn values:
The description of the parameter (to be inserted after the parameter [BURN]) is:
**Label**: Burn value extracted from the "Z" values of the feature [Optional]
**Name**: USE_Z
**Type**: [boolean] Default False
**Description**: Indicates that a burn value should be extracted from the “Z” values of the feature. Works with points and lines (linear interpolation along each segment). For polygons, works properly only if the are flat (same Z value for all vertices)
### Commits tagged with [need-docs] or [FEATURE] | process | update rasterize py request in qgis request for documentation from pull request qgis qgis author talledodiego qgis version update rasterize py pr description description included the possibility to use z of feature to extract burn values fixes issue reported in detailed description referring to latest documentation available this pr includes a new parameter to the plugin the pr affects with the user having the possibility to specify of using z value of features to extract burn values the description of the parameter to be inserted after the parameter is label burn value extracted from the z values of the feature name use z type default false description indicates that a burn value should be extracted from the “z” values of the feature works with points and lines linear interpolation along each segment for polygons works properly only if the are flat same z value for all vertices commits tagged with or | 1 |
14,175 | 17,088,660,823 | IssuesEvent | 2021-07-08 14:46:19 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [PM] Password expired error message is displayed when logging in with default superadmin credentials | Bug P1 Participant manager Process: Reopened | Steps:
1. Login with default superadmin crdentials (updated by script)
2. Observe the error message
Actual: Password expired error message is displayed when logging in with default superadmin credentials
Expected: Should login without any error message | 1.0 | [PM] Password expired error message is displayed when logging in with default superadmin credentials - Steps:
1. Login with default superadmin crdentials (updated by script)
2. Observe the error message
Actual: Password expired error message is displayed when logging in with default superadmin credentials
Expected: Should login without any error message | process | password expired error message is displayed when logging in with default superadmin credentials steps login with default superadmin crdentials updated by script observe the error message actual password expired error message is displayed when logging in with default superadmin credentials expected should login without any error message | 1 |
7,948 | 11,137,529,173 | IssuesEvent | 2019-12-20 19:36:37 | openopps/openopps-platform | https://api.github.com/repos/openopps/openopps-platform | closed | Update applicant status labels | Apply Process Approved Landing page Requirements Ready State Dept. | Who: State applicants
What: applicant status updates
Why: To be more clear and consistent with state language
Acceptance Criteria:
State has requested the following updates to application status:
Selected should be changed to Primary Select
Alternate should be changed to Alternate Select
The help center will be updated separately
| 1.0 | Update applicant status labels - Who: State applicants
What: applicant status updates
Why: To be more clear and consistent with state language
Acceptance Criteria:
State has requested the following updates to application status:
Selected should be changed to Primary Select
Alternate should be changed to Alternate Select
The help center will be updated separately
| process | update applicant status labels who state applicants what applicant status updates why to be more clear and consistent with state language acceptance criteria state has requested the following updates to application status selected should be changed to primary select alternate should be changed to alternate select the help center will be updated separately | 1 |
249,242 | 18,858,174,878 | IssuesEvent | 2021-11-12 09:28:07 | VimuthM/pe | https://api.github.com/repos/VimuthM/pe | opened | [DG] Sequence diagram not readable - Too small | severity.Medium type.DocumentationBug | I cannot see any details, as it is too small.

<!--session: 1636702805811-5c3d0947-abd0-4638-baeb-f5c2670f042b-->
<!--Version: Web v3.4.1--> | 1.0 | [DG] Sequence diagram not readable - Too small - I cannot see any details, as it is too small.

<!--session: 1636702805811-5c3d0947-abd0-4638-baeb-f5c2670f042b-->
<!--Version: Web v3.4.1--> | non_process | sequence diagram not readable too small i cannot see any details as it is too small | 0 |
163,297 | 25,786,825,206 | IssuesEvent | 2022-12-09 21:29:54 | flix/flix | https://api.github.com/repos/flix/flix | closed | Re-design `par` construct | student-programmer language-design | I would like to re-design the `par` construct as follows:
- [x] We keep the `par(exp1, exp2, ... expn)` expression.
- [x] The compiler should be smart enough that if `exp_i` is a variable or literal, no thread is spawned.
- [ ] The compiler should only spawn `n - 1` threads.
- [x] We drop the `par exp1(exp2, exp3, ..., expn)` expression (i.e. `ParApply`).
- [x] We add a new expression of the form `par (x <- exp1, y <- exp2, ..., expn) yield exp`.
- [ ] The compiler should spawn `n - 1` threads.
I suggest to implement each of these changes in small and incremental PRs.
EDIT: I suggest to start by removing features and tidying up. Then adding new stuff. | 1.0 | Re-design `par` construct - I would like to re-design the `par` construct as follows:
- [x] We keep the `par(exp1, exp2, ... expn)` expression.
- [x] The compiler should be smart enough that if `exp_i` is a variable or literal, no thread is spawned.
- [ ] The compiler should only spawn `n - 1` threads.
- [x] We drop the `par exp1(exp2, exp3, ..., expn)` expression (i.e. `ParApply`).
- [x] We add a new expression of the form `par (x <- exp1, y <- exp2, ..., expn) yield exp`.
- [ ] The compiler should spawn `n - 1` threads.
I suggest to implement each of these changes in small and incremental PRs.
EDIT: I suggest to start by removing features and tidying up. Then adding new stuff. | non_process | re design par construct i would like to re design the par construct as follows we keep the par expn expression the compiler should be smart enough that if exp i is a variable or literal no thread is spawned the compiler should only spawn n threads we drop the par expn expression i e parapply we add a new expression of the form par x y expn yield exp the compiler should spawn n threads i suggest to implement each of these changes in small and incremental prs edit i suggest to start by removing features and tidying up then adding new stuff | 0 |
136,286 | 5,279,383,175 | IssuesEvent | 2017-02-07 11:06:05 | ow2-proactive/studio | https://api.github.com/repos/ow2-proactive/studio | closed | Increase text fields length for Data Management | priority:low severity:trivial type:improvement | The popup that appears to configure _includes_ and _excludes_ patterns with dataspace management is large. However, the text fields used to enter patterns have a small width. Since these last are used to set paths, it would be really convenient to increase their width.

| 1.0 | Increase text fields length for Data Management - The popup that appears to configure _includes_ and _excludes_ patterns with dataspace management is large. However, the text fields used to enter patterns have a small width. Since these last are used to set paths, it would be really convenient to increase their width.

| non_process | increase text fields length for data management the popup that appears to configure includes and excludes patterns with dataspace management is large however the text fields used to enter patterns have a small width since these last are used to set paths it would be really convenient to increase their width | 0 |
847 | 3,315,473,352 | IssuesEvent | 2015-11-06 12:18:00 | superroma/testcafe-hammerhead | https://api.github.com/repos/superroma/testcafe-hammerhead | closed | Scripts passed to the WebWorker via Blob or data-uri doesn't have processing header | AREA: client AREA: server SYSTEM: resource processing TYPE: bug | We should always have processing header in WebWorkers since we need to provide instruction fallbacks. But if script was provided via Blob object or data-uri we don't explicitly add processing header which results in error.
**Refs:**
About WebWorker Blob : http://stackoverflow.com/questions/10343913/how-to-create-a-web-worker-from-a-string
Failing page: http://www.homedepot.com/c/HomePageRegional3?s_tnt=56312:1:0
(in HHM: https://testcafe-hhhm.devexpress.com/history?timestamp=1446404591534&url=http://www.homedepot.com/c/HomePageRegional3?s_tnt=56312:1:0) | 1.0 | Scripts passed to the WebWorker via Blob or data-uri doesn't have processing header - We should always have processing header in WebWorkers since we need to provide instruction fallbacks. But if script was provided via Blob object or data-uri we don't explicitly add processing header which results in error.
**Refs:**
About WebWorker Blob : http://stackoverflow.com/questions/10343913/how-to-create-a-web-worker-from-a-string
Failing page: http://www.homedepot.com/c/HomePageRegional3?s_tnt=56312:1:0
(in HHM: https://testcafe-hhhm.devexpress.com/history?timestamp=1446404591534&url=http://www.homedepot.com/c/HomePageRegional3?s_tnt=56312:1:0) | process | scripts passed to the webworker via blob or data uri doesn t have processing header we should always have processing header in webworkers since we need to provide instruction fallbacks but if script was provided via blob object or data uri we don t explicitly add processing header which results in error refs about webworker blob failing page in hhm | 1 |
21,850 | 30,320,764,015 | IssuesEvent | 2023-07-10 19:03:14 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | showing incorrect count for just for one api | question log-processing | hi im using this goaccess tool and i observe that the for one api count of hits is missmatch. i check the in log file it showing the total count and after that if i m seeing in generated html report it showing less hit count of that api help me with it | 1.0 | showing incorrect count for just for one api - hi im using this goaccess tool and i observe that the for one api count of hits is missmatch. i check the in log file it showing the total count and after that if i m seeing in generated html report it showing less hit count of that api help me with it | process | showing incorrect count for just for one api hi im using this goaccess tool and i observe that the for one api count of hits is missmatch i check the in log file it showing the total count and after that if i m seeing in generated html report it showing less hit count of that api help me with it | 1 |
9,778 | 12,797,289,472 | IssuesEvent | 2020-07-02 12:03:20 | nanoframework/Home | https://api.github.com/repos/nanoframework/Home | closed | Compilation error when multidimensional array is defined | Area: Metadata Processor Status: FIXED Type: Bug | ### Details about Problem
**nanoFramework area:** C# code
**VS version:** 2019
**VS extension version:** 2019.2.0.17
**Target:** ESP32-WROOM_32
**Device capabilities output<!--(if relevant)-->:**
### Description
I've tried to define a matrix as int[,] arr = new int[10,10] and got a strange error during compilation.
Severity Code Description Project File Line Suppression State
Error Exception minimizing assembly: Value cannot be null.
Parameter name: key. NFApp1 C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets 223
If I remove this line from the code, the compilation process and deployment are going fine. It is easy to reproduce this bug - just create a blank project and try to define an array with more than one dimension.
### Detailed repro steps so we can see the same problem
1. Create a blank solution
2. Define an array as int[,] arr = new int[10,10]
3. Compile
<!--- bug-report-tools DO NOT REMOVE -->
| 1.0 | Compilation error when multidimensional array is defined - ### Details about Problem
**nanoFramework area:** C# code
**VS version:** 2019
**VS extension version:** 2019.2.0.17
**Target:** ESP32-WROOM_32
**Device capabilities output<!--(if relevant)-->:**
### Description
I've tried to define a matrix as int[,] arr = new int[10,10] and got a strange error during compilation.
Severity Code Description Project File Line Suppression State
Error Exception minimizing assembly: Value cannot be null.
Parameter name: key. NFApp1 C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\nanoFramework\v1.0\NFProjectSystem.MDP.targets 223
If I remove this line from the code, the compilation process and deployment are going fine. It is easy to reproduce this bug - just create a blank project and try to define an array with more than one dimension.
### Detailed repro steps so we can see the same problem
1. Create a blank solution
2. Define an array as int[,] arr = new int[10,10]
3. Compile
<!--- bug-report-tools DO NOT REMOVE -->
| process | compilation error when multidimensional array is defined details about problem nanoframework area c code vs version vs extension version target wroom device capabilities output description i ve tried to define a matrix as int arr new int and got a strange error during compilation severity code description project file line suppression state error exception minimizing assembly value cannot be null parameter name key c program files microsoft visual studio community msbuild nanoframework nfprojectsystem mdp targets if i remove this line from the code the compilation process and deployment are going fine it is easy to reproduce this bug just create a blank project and try to define an array with more than one dimension detailed repro steps so we can see the same problem create a blank solution define an array as int arr new int compile | 1 |
226,946 | 18,045,970,406 | IssuesEvent | 2021-09-18 22:41:08 | logicmoo/logicmoo_workspace | https://api.github.com/repos/logicmoo/logicmoo_workspace | opened | logicmoo.pfc.test.sanity_base.BC_01C JUnit | Test_9999 logicmoo.pfc.test.sanity_base unit_test BC_01C | (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc)
GH_MASTER_ISSUE_FINFO=
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ABC_01C
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/bc_01c.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/BC_01C/logicmoo_pfc_test_sanity_base_BC_01C_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/68/testReport/logicmoo.pfc.test.sanity_base/BC_01C/logicmoo_pfc_test_sanity_base_BC_01C_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://github.com/logicmoo/logicmoo_workspace/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/bc_01c.pfc
```
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/bc_01c.pfc'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- dynamic(bc_q/1).
:- dynamic(bc_p/1).
:- ain((bc_q(N) <- bc_p(N))).
bc_p(a).
bc_p(b).
:- mpred_test(call_u(bc_p(b))).
%= nothing cached
%~ mpred_test("Test_0001_Line_0000__B",baseKB:call_u(bc_p(b)))
/*~
%~ mpred_test("Test_0001_Line_0000__B",baseKB:call_u(bc_p(b)))
passed=info(why_was_true(baseKB:call_u(bc_p(b))))
no_proof_for(call_u(bc_p(b))).
no_proof_for(call_u(bc_p(b))).
no_proof_for(call_u(bc_p(b))).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0001_Line_0000__B'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0001_Line_0000__B-junit.xml
~*/
%= nothing cached
:- mpred_test(\+ clause(bc_q(_),true)).
%~ mpred_test("Test_0002_Line_0000__naf_bc_q_1",baseKB:(\+clause(bc_q(_454),true)))
/*~
%~ mpred_test("Test_0002_Line_0000__naf_bc_q_1",baseKB:(\+clause(bc_q(_454),true)))
passed=info(why_was_true(baseKB:(\+clause(bc_q(_454),true))))
no_proof_for(\+clause(bc_q(Q),true)).
no_proof_for(\+clause(bc_q(Q),true)).
no_proof_for(\+clause(bc_q(Q),true)).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0002_Line_0000__naf_bc_q_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0002_Line_0000__naf_bc_q_1-junit.xml
~*/
:- mpred_test(((bc_q(b)))).
%= something cached
%~ mpred_test("Test_0003_Line_0000__B",baseKB:bc_q(b))
/*~
%~ mpred_test("Test_0003_Line_0000__B",baseKB:bc_q(b))
^ Call: (117) [pfc_lib] lookup_spft('$bt'(bc_q(b), _52480), _52482, _52484)
^ Unify: (117) [pfc_lib] lookup_spft('$bt'(bc_q(b), _52480), _52482, _52484)
^ Fail: (117) [pfc_lib] lookup_spft('$bt'(bc_q(b), _52480), _52482, _52484)
^ Call: (117) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (117) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Call: (88) [pfc_lib] lookup_spft('$bt'(bc_q(b), _58586), _58588, _58590)
^ Unify: (88) [pfc_lib] lookup_spft('$bt'(bc_q(b), _58586), _58588, _58590)
^ Fail: (88) [pfc_lib] lookup_spft('$bt'(bc_q(b), _58586), _58588, _58590)
^ Call: (88) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (88) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Call: (172) [pfc_lib] lookup_spft('$bt'(bc_q(b), _69772), _69774, _69776)
^ Unify: (172) [pfc_lib] lookup_spft('$bt'(bc_q(b), _69772), _69774, _69776)
^ Fail: (172) [pfc_lib] lookup_spft('$bt'(bc_q(b), _69772), _69774, _69776)
^ Call: (172) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (172) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Call: (143) [pfc_lib] lookup_spft('$bt'(bc_q(b), _75878), _75880, _75882)
^ Unify: (143) [pfc_lib] lookup_spft('$bt'(bc_q(b), _75878), _75880, _75882)
^ Fail: (143) [pfc_lib] lookup_spft('$bt'(bc_q(b), _75878), _75880, _75882)
^ Call: (143) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (143) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Call: (68) [baseKB] baseKB:bc_q(b)
^ Unify: (68) [baseKB] baseKB:bc_q(b)
^ Call: (69) [baseKB] awc
^ Unify: (69) [baseKB] awc
^ Exit: (69) [baseKB] awc
^ Call: (69) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Unify: (69) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Call: (72) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1069, baseKB:fail)
^ Unify: (72) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1069, baseKB:fail)
Call: (73) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (73) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (73) [loop_check] prolog_frame_attribute(1069, parent_goal, loop_check_term_frame(_91974, info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, _91980, _91982))
^ Fail: (73) [loop_check] prolog_frame_attribute(1069, parent_goal, loop_check_term_frame(_91974, info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, _91980, _91982))
^ Redo: (72) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1069, baseKB:fail)
^ Call: (73) [baseKB] mpred_call_only_facts(bc_q(b))
^ Unify: (73) [baseKB] mpred_call_only_facts(bc_q(b))
^ Call: (80) [loop_check] loop_check:loop_check_term_frame(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, 1227, pfc_lib:trace_or_throw(looped(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))))))
^ Unify: (80) [loop_check] loop_check:loop_check_term_frame(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, 1227, pfc_lib:trace_or_throw(looped(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))))))
Call: (81) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (81) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (81) [loop_check] prolog_frame_attribute(1227, parent_goal, loop_check_term_frame(_97826, info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, _97832, _97834))
^ Fail: (81) [loop_check] prolog_frame_attribute(1227, parent_goal, loop_check_term_frame(_97826, info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, _97832, _97834))
^ Redo: (80) [loop_check] loop_check:loop_check_term_frame(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, 1227, pfc_lib:trace_or_throw(looped(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))))))
^ Call: (81) [baseKB] locally_each:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b)))
^ Unify: (81) [baseKB] locally_each:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b)))
^ Call: (82) [baseKB] locally(t_l:infAssertedOnly(bc_q(b)), call_u(bc_q(b)))
^ Unify: (82) [locally_each] locally(t_l:infAssertedOnly(bc_q(b)), baseKB:call_u(bc_q(b)))
^ Call: (85) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Unify: (85) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Exit: (85) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Call: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Unify: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Exit: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Call: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Unify: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Exit: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Call: (86) [locally_each] locally_each:clause_true(t_l, t_l:infAssertedOnly(bc_q(b)))
^ Unify: (86) [locally_each] locally_each:clause_true(t_l, t_l:infAssertedOnly(bc_q(b)))
Call: (87) [system] copy_term(t_l:infAssertedOnly(bc_q(b)), _110582)
Exit: (87) [system] copy_term(t_l:infAssertedOnly(bc_q(b)), t_l:infAssertedOnly(bc_q(b)))
^ Call: (87) [t_l] clause(t_l:infAssertedOnly(bc_q(b)), true)
^ Fail: (87) [t_l] clause(infAssertedOnly(bc_q(b)), true)
^ Fail: (86) [locally_each] locally_each:clause_true(t_l, t_l:infAssertedOnly(bc_q(b)))
^ Call: (92) [locally_each] locally_each:key_asserta(t_l, t_l:infAssertedOnly(bc_q(b)))
^ Unify: (92) [locally_each] locally_each:key_asserta(t_l, t_l:infAssertedOnly(bc_q(b)))
^ Call: (93) [t_l] asserta(t_l:infAssertedOnly(bc_q(b)), _115036)
^ Exit: (93) [t_l] asserta(t_l:infAssertedOnly(bc_q(b)), <clause>(0x55cff648d2d0))
Call: (93) [system] nb_current('$w_tl_e', _116274)
Exit: (93) [system] nb_current('$w_tl_e', [<clause>(0x55cffa96c2a0)])
Call: (93) [system] nb_linkval('$w_tl_e', [<clause>(0x55cff648d2d0), <clause>(0x55cffa96c2a0)])
Exit: (93) [system] nb_linkval('$w_tl_e', [<clause>(0x55cff648d2d0), <clause>(0x55cffa96c2a0)])
^ Exit: (92) [locally_each] locally_each:key_asserta(t_l, t_l:infAssertedOnly(bc_q(b)))
^ Call: (91) [baseKB] call_u(bc_q(b))
^ Unify: (91) [pfc_lib] call_u(baseKB:bc_q(b))
^ Call: (95) [baseKB] baseKB:bc_q(b)
^ Unify: (95) [baseKB] baseKB:bc_q(b)
^ Call: (96) [baseKB] awc
^ Unify: (96) [baseKB] awc
^ Exit: (96) [baseKB] awc
^ Call: (96) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Unify: (96) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Call: (99) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1684, baseKB:fail)
^ Unify: (99) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1684, baseKB:fail)
Call: (100) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (100) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (100) [loop_check] prolog_frame_attribute(1684, parent_goal, loop_check_term_frame(_127666, info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, _127672, _127674))
^ Exit: (100) [loop_check] prolog_frame_attribute(1684, parent_goal, loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1069, baseKB:fail))
Call: (100) [system] fail
Fail: (100) [system] fail
^ Fail: (99) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1684, baseKB:fail)
^ Call: (106) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1833, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Unify: (106) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1833, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
Call: (107) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (107) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (107) [loop_check] prolog_frame_attribute(1833, parent_goal, loop_check_term_frame(_4130, pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, _4136, _4138))
^ Fail: (107) [loop_check] prolog_frame_attribute(1833, parent_goal, loop_check_term_frame(_4130, pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, _4136, _4138))
^ Redo: (106) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1833, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Call: (107) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Unify: (107) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Unify: (107) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
Call: (119) [$autoload] leave_sandbox(_8006)
Unify: (119) [$autoload] leave_sandbox(_8006)
Exit: (119) [$autoload] leave_sandbox(false)
Call: (118) [$autoload] restore_sandbox(false)
Unify: (118) [$autoload] restore_sandbox(false)
Exit: (118) [$autoload] restore_sandbox(false)
^ Unify: (107) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Call: (108) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Unify: (108) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Redo: (108) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Exit: (108) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Call: (114) [pfc_lib] lookup_spft('$bt'(bc_q(b), _15924), _15926, _15928)
^ Unify: (114) [pfc_lib] lookup_spft('$bt'(bc_q(b), _15924), _15926, _15928)
^ Fail: (114) [pfc_lib] lookup_spft('$bt'(bc_q(b), _15924), _15926, _15928)
^ Call: (114) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Unify: (114) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Exit: (114) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Exit: (107) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Exit: (106) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1833, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Call: (103) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _23772)
^ Unify: (103) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _23772)
^ Call: (104) [system] clause(pfc_lib:bc_q(b), true, _23772)
^ Fail: (104) [system] clause(pfc_lib:bc_q(b), true, _23772)
^ Fail: (103) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _23772)
^ Call: (98) [baseKB] inherit_above(baseKB, bc_q(b))
^ Unify: (98) [baseKB] inherit_above(baseKB, bc_q(b))
Call: (99) [system] baseKB\=baseKB
Fail: (99) [system] baseKB\=baseKB
^ Fail: (98) [baseKB] inherit_above(baseKB, bc_q(b))
^ Fail: (96) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Fail: (95) [baseKB] baseKB:bc_q(b)
^ Fail: (91) [pfc_lib] call_u(baseKB:bc_q(b))
^ Call: (92) [locally_each] locally_each:key_erase(t_l)
^ Unify: (92) [locally_each] locally_each:key_erase(t_l)
Call: (93) [system] nb_current('$w_tl_e', [_33152|_33154])
Exit: (93) [system] nb_current('$w_tl_e', [<clause>(0x55cff648d2d0), <clause>(0x55cffa96c2a0)])
Call: (93) [system] nb_linkval('$w_tl_e', [<clause>(0x55cffa96c2a0)])
Exit: (93) [system] nb_linkval('$w_tl_e', [<clause>(0x55cffa96c2a0)])
Call: (94) [system] erase(<clause>(0x55cff648d2d0))
Exit: (94) [system] erase(<clause>(0x55cff648d2d0))
Call: (93) [system] true
Exit: (93) [system] true
Call: (93) [system] true
Exit: (93) [system] true
^ Exit: (92) [locally_each] locally_each:key_erase(t_l)
^ Fail: (82) [locally_each] locally(t_l:infAssertedOnly(bc_q(b)), baseKB:call_u(bc_q(b)))
^ Fail: (81) [baseKB] locally_each:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b)))
^ Fail: (80) [loop_check] loop_check:loop_check_term_frame(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, 1227, pfc_lib:trace_or_throw(looped(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))))))
^ Fail: (73) [baseKB] mpred_call_only_facts(bc_q(b))
^ Fail: (72) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1069, baseKB:fail)
^ Call: (79) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1218, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Unify: (79) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1218, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
Call: (80) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (80) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (80) [loop_check] prolog_frame_attribute(1218, parent_goal, loop_check_term_frame(_45542, pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, _45548, _45550))
^ Fail: (80) [loop_check] prolog_frame_attribute(1218, parent_goal, loop_check_term_frame(_45542, pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, _45548, _45550))
^ Redo: (79) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1218, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Call: (80) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Unify: (80) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Unify: (80) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
Call: (92) [$autoload] leave_sandbox(_49418)
Unify: (92) [$autoload] leave_sandbox(_49418)
Exit: (92) [$autoload] leave_sandbox(false)
Call: (91) [$autoload] restore_sandbox(false)
Unify: (91) [$autoload] restore_sandbox(false)
Exit: (91) [$autoload] restore_sandbox(false)
^ Unify: (80) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Call: (81) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Unify: (81) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Redo: (81) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Exit: (81) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Call: (87) [pfc_lib] lookup_spft('$bt'(bc_q(b), _57336), _57338, _57340)
^ Unify: (87) [pfc_lib] lookup_spft('$bt'(bc_q(b), _57336), _57338, _57340)
^ Fail: (87) [pfc_lib] lookup_spft('$bt'(bc_q(b), _57336), _57338, _57340)
^ Call: (87) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Unify: (87) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Exit: (87) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Exit: (80) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Exit: (79) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1218, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Call: (76) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _65184)
^ Unify: (76) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _65184)
^ Call: (77) [system] clause(pfc_lib:bc_q(b), true, _65184)
^ Fail: (77) [system] clause(pfc_lib:bc_q(b), true, _65184)
^ Fail: (76) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _65184)
^ Call: (71) [baseKB] inherit_above(baseKB, bc_q(b))
^ Unify: (71) [baseKB] inherit_above(baseKB, bc_q(b))
Call: (72) [system] baseKB\=baseKB
Fail: (72) [system] baseKB\=baseKB
^ Fail: (71) [baseKB] inherit_above(baseKB, bc_q(b))
^ Fail: (69) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Fail: (68) [baseKB] baseKB:bc_q(b)
^ Call: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
failure=info((why_was_true(baseKB:(\+bc_q(b))),rtrace(baseKB:bc_q(b))))
no_proof_for(\+bc_q(b)).
no_proof_for(\+bc_q(b)).
no_proof_for(\+bc_q(b)).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0003_Line_0000__B'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0003_Line_0000__B-junit.xml
~*/
%= something cached
:- mpred_test( clause(bc_q(_),true)).
% Are we cleaning up backchains?
%~ mpred_test("Test_0004_Line_0000__bc_q_1",baseKB:clause(bc_q(_31714),true))
/*~
%~ mpred_test("Test_0004_Line_0000__bc_q_1",baseKB:clause(bc_q(_31714),true))
^ Call: (68) [baseKB] clause(bc_q(_31714), true)
^ Fail: (68) [baseKB] clause(bc_q(_31714), true)
^ Call: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
failure=info((why_was_true(baseKB:(\+clause(bc_q(_31714),true))),rtrace(baseKB:clause(bc_q(_31714),true))))
no_proof_for(\+clause(bc_q(Q),true)).
no_proof_for(\+clause(bc_q(Q),true)).
no_proof_for(\+clause(bc_q(Q),true)).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0004_Line_0000__bc_q_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0004_Line_0000__bc_q_1-junit.xml
~*/
% Are we cleaning up backchains?
:- ignore((mpred_info((((bc_q(N) <- bc_p(N))))))).
%~ =======================================================================
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/bc_01c.pfc#L40
%~ =======================================================================
%~ =======================================================================
/*~
Justifications for bc_q(N)<-bc_p(N):
[36m 1.1 mfl4(['N'=_],baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/bc_01c.pfc#L19 ',19) [0m
==================
mpred_db_type = rule(bwc).
==================
fail = mpred_child(s,v).
%~ -mpred_axiom.
%~ -well_founded.
%~ -mpred_assumption.
==================
:- dynamic (<-)/2.
:- multifile (<-)/2.
:- public (<-)/2.
:- module_transparent (<-)/2.
bc_q(A)<-bc_p(A).
%~ -get_mpred_is_tracing.
==================
get_tms_mode=full.
%~ +( mpred_supported(local,(bc_q(P_Q)<-bc_p(P_Q)))).
%~ +( mpred_supported(full,(bc_q(P_Q)<-bc_p(P_Q)))).
%~ -( mpred_supported(none,(bc_q(P_Q)<-bc_p(P_Q)))).
~*/
:- mpred_info(((bct(bc_q(_6600462), pt(bc_p(_6600462), rhs([bc_q(_6600462)])))))).
%~ =======================================================================
%~ make_dynamic_here(baseKB,bct(bc_q(_431654),pt(bc_p(_431654),rhs([bc_q(_431654)]))))
%~ =======================================================================
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/bc_01c.pfc:43
%~ =======================================================================
/*~
no_proof_for(bct(bc_q(_6600462),pt(bc_p(_6600462),rhs([ bc_q(_6600462)])))).
no_proof_for(bct(bc_q(_6600462),pt(bc_p(_6600462),rhs([ bc_q(_6600462)])))).
==================
mpred_db_type = fact(Fact).
==================
fail = mpred_child(s,v).
%~ -mpred_axiom.
%~ -well_founded.
%~ -mpred_assumption.
==================
:- dynamic bct/2.
%~ -get_mpred_is_tracing.
==================
get_tms_mode=full.
%~ +( mpred_supported(local,bct(bc_q(P_Q),pt(bc_p(P_Q),rhs([bc_q(P_Q)]))))).
%~ +( mpred_supported(full,bct(bc_q(P_Q),pt(bc_p(P_Q),rhs([bc_q(P_Q)]))))).
%~ -( mpred_supported(none,bct(bc_q(P_Q),pt(bc_p(P_Q),rhs([bc_q(P_Q)]))))).
~*/
:- mpred_test(((mpred_withdraw(((bc_q(N) <- bc_p(N))))),\+ clause(bc_q(_),true))).
%:- mpred_test(((mpred_undo1(((bc_q(N) <- bc_p(N))))),\+ clause(bc_q(_),true))).
%:- mpred_test(((mpred_retract(((bc_q(N) <- bc_p(N))))),\+ clause(bc_q(_),true))).
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/bc_01c.pfc:45
%~ mpred_test( "Test_0005_Line_0000__bc_q_1",
%~ baseKB : mpred_withdraw((bc_q(N)<-bc_p(N))),\+clause(bc_q(Q),true))
/*~
%~ mpred_test("Test_0005_Line_0000__bc_q_1",baseKB:(mpred_withdraw((bc_q(_42648)<-bc_p(_42648))),\+clause(bc_q(_42670),true)))
passed=info(why_was_true(baseKB:(mpred_withdraw((bc_q(_42648)<-bc_p(_42648))),\+clause(bc_q(_42670),true))))
no_proof_for((mpred_withdraw((bc_q(N)<-bc_p(N))),\+clause(bc_q(Q),true))).
no_proof_for((mpred_withdraw((bc_q(N)<-bc_p(N))),\+clause(bc_q(Q),true))).
no_proof_for((mpred_withdraw((bc_q(N)<-bc_p(N))),\+clause(bc_q(Q),true))).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0005_Line_0000__bc_q_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0005_Line_0000__bc_q_1-junit.xml
~*/
%:- mpred_test(((mpred_undo1(((bc_q(N) <- bc_p(N))))),\+ clause(bc_q(_),true))).
%:- mpred_test(((mpred_retract(((bc_q(N) <- bc_p(N))))),\+ clause(bc_q(_),true))).
:- get_bc_clause(bc_q(_A),BackChainClause),
mpred_why(( BackChainClause)).
/*~
no_proof_for((bc_q(_A):-awc,!,mpred_bc_and_with_pfc(bc_q(_A)))).
no_proof_for((bc_q(_A):-awc,!,mpred_bc_and_with_pfc(bc_q(_A)))).
~*/
:- get_bc_clause(bc_q(_A),H,B),
mpred_test(\+ clause(H,B)).
%~ mpred_test( "Test_0006_Line_0000__naf_bc_q_1",
%~ baseKB : \+( clause(bc_q(_A),(awc,!,mpred_bc_and_with_pfc(bc_q(_A))))))
/*~
%~ mpred_test("Test_0006_Line_0000__naf_bc_q_1",baseKB:(\+clause(bc_q(_452456),(awc,!,mpred_bc_and_with_pfc(bc_q(_452456))))))
passed=info(why_was_true(baseKB:(\+clause(bc_q(_452456),(awc,!,mpred_bc_and_with_pfc(bc_q(_452456)))))))
no_proof_for(\+clause(bc_q(_A),(awc,!,mpred_bc_and_with_pfc(bc_q(_A))))).
no_proof_for(\+clause(bc_q(_A),(awc,!,mpred_bc_and_with_pfc(bc_q(_A))))).
no_proof_for(\+clause(bc_q(_A),(awc,!,mpred_bc_and_with_pfc(bc_q(_A))))).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0006_Line_0000__naf_bc_q_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0006_Line_0000__naf_bc_q_1-junit.xml
~*/
%~ unused(save_junit_results)
%~ test_completed_exit(8)
:- dynamic junit_prop/3.
:- dynamic junit_prop/3.
:- dynamic junit_prop/3.
```
totalTime=1
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ABC_01C
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/bc_01c.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/BC_01C/logicmoo_pfc_test_sanity_base_BC_01C_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/68/testReport/logicmoo.pfc.test.sanity_base/BC_01C/logicmoo_pfc_test_sanity_base_BC_01C_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://github.com/logicmoo/logicmoo_workspace/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/bc_01c.pfc
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k bc_01c.pfc (returned 8)
| 3.0 | logicmoo.pfc.test.sanity_base.BC_01C JUnit - (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc)
GH_MASTER_ISSUE_FINFO=
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ABC_01C
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/bc_01c.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/BC_01C/logicmoo_pfc_test_sanity_base_BC_01C_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/68/testReport/logicmoo.pfc.test.sanity_base/BC_01C/logicmoo_pfc_test_sanity_base_BC_01C_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://github.com/logicmoo/logicmoo_workspace/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/bc_01c.pfc
```
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/bc_01c.pfc'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- dynamic(bc_q/1).
:- dynamic(bc_p/1).
:- ain((bc_q(N) <- bc_p(N))).
bc_p(a).
bc_p(b).
:- mpred_test(call_u(bc_p(b))).
%= nothing cached
%~ mpred_test("Test_0001_Line_0000__B",baseKB:call_u(bc_p(b)))
/*~
%~ mpred_test("Test_0001_Line_0000__B",baseKB:call_u(bc_p(b)))
passed=info(why_was_true(baseKB:call_u(bc_p(b))))
no_proof_for(call_u(bc_p(b))).
no_proof_for(call_u(bc_p(b))).
no_proof_for(call_u(bc_p(b))).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0001_Line_0000__B'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0001_Line_0000__B-junit.xml
~*/
%= nothing cached
:- mpred_test(\+ clause(bc_q(_),true)).
%~ mpred_test("Test_0002_Line_0000__naf_bc_q_1",baseKB:(\+clause(bc_q(_454),true)))
/*~
%~ mpred_test("Test_0002_Line_0000__naf_bc_q_1",baseKB:(\+clause(bc_q(_454),true)))
passed=info(why_was_true(baseKB:(\+clause(bc_q(_454),true))))
no_proof_for(\+clause(bc_q(Q),true)).
no_proof_for(\+clause(bc_q(Q),true)).
no_proof_for(\+clause(bc_q(Q),true)).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0002_Line_0000__naf_bc_q_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0002_Line_0000__naf_bc_q_1-junit.xml
~*/
:- mpred_test(((bc_q(b)))).
%= something cached
%~ mpred_test("Test_0003_Line_0000__B",baseKB:bc_q(b))
/*~
%~ mpred_test("Test_0003_Line_0000__B",baseKB:bc_q(b))
^ Call: (117) [pfc_lib] lookup_spft('$bt'(bc_q(b), _52480), _52482, _52484)
^ Unify: (117) [pfc_lib] lookup_spft('$bt'(bc_q(b), _52480), _52482, _52484)
^ Fail: (117) [pfc_lib] lookup_spft('$bt'(bc_q(b), _52480), _52482, _52484)
^ Call: (117) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (117) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Call: (88) [pfc_lib] lookup_spft('$bt'(bc_q(b), _58586), _58588, _58590)
^ Unify: (88) [pfc_lib] lookup_spft('$bt'(bc_q(b), _58586), _58588, _58590)
^ Fail: (88) [pfc_lib] lookup_spft('$bt'(bc_q(b), _58586), _58588, _58590)
^ Call: (88) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (88) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Call: (172) [pfc_lib] lookup_spft('$bt'(bc_q(b), _69772), _69774, _69776)
^ Unify: (172) [pfc_lib] lookup_spft('$bt'(bc_q(b), _69772), _69774, _69776)
^ Fail: (172) [pfc_lib] lookup_spft('$bt'(bc_q(b), _69772), _69774, _69776)
^ Call: (172) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (172) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Call: (143) [pfc_lib] lookup_spft('$bt'(bc_q(b), _75878), _75880, _75882)
^ Unify: (143) [pfc_lib] lookup_spft('$bt'(bc_q(b), _75878), _75880, _75882)
^ Fail: (143) [pfc_lib] lookup_spft('$bt'(bc_q(b), _75878), _75880, _75882)
^ Call: (143) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (143) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Call: (68) [baseKB] baseKB:bc_q(b)
^ Unify: (68) [baseKB] baseKB:bc_q(b)
^ Call: (69) [baseKB] awc
^ Unify: (69) [baseKB] awc
^ Exit: (69) [baseKB] awc
^ Call: (69) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Unify: (69) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Call: (72) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1069, baseKB:fail)
^ Unify: (72) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1069, baseKB:fail)
Call: (73) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (73) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (73) [loop_check] prolog_frame_attribute(1069, parent_goal, loop_check_term_frame(_91974, info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, _91980, _91982))
^ Fail: (73) [loop_check] prolog_frame_attribute(1069, parent_goal, loop_check_term_frame(_91974, info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, _91980, _91982))
^ Redo: (72) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1069, baseKB:fail)
^ Call: (73) [baseKB] mpred_call_only_facts(bc_q(b))
^ Unify: (73) [baseKB] mpred_call_only_facts(bc_q(b))
^ Call: (80) [loop_check] loop_check:loop_check_term_frame(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, 1227, pfc_lib:trace_or_throw(looped(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))))))
^ Unify: (80) [loop_check] loop_check:loop_check_term_frame(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, 1227, pfc_lib:trace_or_throw(looped(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))))))
Call: (81) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (81) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (81) [loop_check] prolog_frame_attribute(1227, parent_goal, loop_check_term_frame(_97826, info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, _97832, _97834))
^ Fail: (81) [loop_check] prolog_frame_attribute(1227, parent_goal, loop_check_term_frame(_97826, info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, _97832, _97834))
^ Redo: (80) [loop_check] loop_check:loop_check_term_frame(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, 1227, pfc_lib:trace_or_throw(looped(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))))))
^ Call: (81) [baseKB] locally_each:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b)))
^ Unify: (81) [baseKB] locally_each:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b)))
^ Call: (82) [baseKB] locally(t_l:infAssertedOnly(bc_q(b)), call_u(bc_q(b)))
^ Unify: (82) [locally_each] locally(t_l:infAssertedOnly(bc_q(b)), baseKB:call_u(bc_q(b)))
^ Call: (85) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Unify: (85) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Exit: (85) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Call: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Unify: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Exit: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Call: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Unify: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Exit: (86) [must_sanity] must_sanity:mquietly_if(false, rtrace:tAt_rtrace)
^ Call: (86) [locally_each] locally_each:clause_true(t_l, t_l:infAssertedOnly(bc_q(b)))
^ Unify: (86) [locally_each] locally_each:clause_true(t_l, t_l:infAssertedOnly(bc_q(b)))
Call: (87) [system] copy_term(t_l:infAssertedOnly(bc_q(b)), _110582)
Exit: (87) [system] copy_term(t_l:infAssertedOnly(bc_q(b)), t_l:infAssertedOnly(bc_q(b)))
^ Call: (87) [t_l] clause(t_l:infAssertedOnly(bc_q(b)), true)
^ Fail: (87) [t_l] clause(infAssertedOnly(bc_q(b)), true)
^ Fail: (86) [locally_each] locally_each:clause_true(t_l, t_l:infAssertedOnly(bc_q(b)))
^ Call: (92) [locally_each] locally_each:key_asserta(t_l, t_l:infAssertedOnly(bc_q(b)))
^ Unify: (92) [locally_each] locally_each:key_asserta(t_l, t_l:infAssertedOnly(bc_q(b)))
^ Call: (93) [t_l] asserta(t_l:infAssertedOnly(bc_q(b)), _115036)
^ Exit: (93) [t_l] asserta(t_l:infAssertedOnly(bc_q(b)), <clause>(0x55cff648d2d0))
Call: (93) [system] nb_current('$w_tl_e', _116274)
Exit: (93) [system] nb_current('$w_tl_e', [<clause>(0x55cffa96c2a0)])
Call: (93) [system] nb_linkval('$w_tl_e', [<clause>(0x55cff648d2d0), <clause>(0x55cffa96c2a0)])
Exit: (93) [system] nb_linkval('$w_tl_e', [<clause>(0x55cff648d2d0), <clause>(0x55cffa96c2a0)])
^ Exit: (92) [locally_each] locally_each:key_asserta(t_l, t_l:infAssertedOnly(bc_q(b)))
^ Call: (91) [baseKB] call_u(bc_q(b))
^ Unify: (91) [pfc_lib] call_u(baseKB:bc_q(b))
^ Call: (95) [baseKB] baseKB:bc_q(b)
^ Unify: (95) [baseKB] baseKB:bc_q(b)
^ Call: (96) [baseKB] awc
^ Unify: (96) [baseKB] awc
^ Exit: (96) [baseKB] awc
^ Call: (96) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Unify: (96) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Call: (99) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1684, baseKB:fail)
^ Unify: (99) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1684, baseKB:fail)
Call: (100) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (100) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (100) [loop_check] prolog_frame_attribute(1684, parent_goal, loop_check_term_frame(_127666, info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, _127672, _127674))
^ Exit: (100) [loop_check] prolog_frame_attribute(1684, parent_goal, loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1069, baseKB:fail))
Call: (100) [system] fail
Fail: (100) [system] fail
^ Fail: (99) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1684, baseKB:fail)
^ Call: (106) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1833, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Unify: (106) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1833, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
Call: (107) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (107) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (107) [loop_check] prolog_frame_attribute(1833, parent_goal, loop_check_term_frame(_4130, pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, _4136, _4138))
^ Fail: (107) [loop_check] prolog_frame_attribute(1833, parent_goal, loop_check_term_frame(_4130, pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, _4136, _4138))
^ Redo: (106) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1833, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Call: (107) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Unify: (107) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Unify: (107) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
Call: (119) [$autoload] leave_sandbox(_8006)
Unify: (119) [$autoload] leave_sandbox(_8006)
Exit: (119) [$autoload] leave_sandbox(false)
Call: (118) [$autoload] restore_sandbox(false)
Unify: (118) [$autoload] restore_sandbox(false)
Exit: (118) [$autoload] restore_sandbox(false)
^ Unify: (107) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Call: (108) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Unify: (108) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Redo: (108) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Exit: (108) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Call: (114) [pfc_lib] lookup_spft('$bt'(bc_q(b), _15924), _15926, _15928)
^ Unify: (114) [pfc_lib] lookup_spft('$bt'(bc_q(b), _15924), _15926, _15928)
^ Fail: (114) [pfc_lib] lookup_spft('$bt'(bc_q(b), _15924), _15926, _15928)
^ Call: (114) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Unify: (114) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Exit: (114) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Exit: (107) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Exit: (106) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1833, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Call: (103) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _23772)
^ Unify: (103) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _23772)
^ Call: (104) [system] clause(pfc_lib:bc_q(b), true, _23772)
^ Fail: (104) [system] clause(pfc_lib:bc_q(b), true, _23772)
^ Fail: (103) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _23772)
^ Call: (98) [baseKB] inherit_above(baseKB, bc_q(b))
^ Unify: (98) [baseKB] inherit_above(baseKB, bc_q(b))
Call: (99) [system] baseKB\=baseKB
Fail: (99) [system] baseKB\=baseKB
^ Fail: (98) [baseKB] inherit_above(baseKB, bc_q(b))
^ Fail: (96) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Fail: (95) [baseKB] baseKB:bc_q(b)
^ Fail: (91) [pfc_lib] call_u(baseKB:bc_q(b))
^ Call: (92) [locally_each] locally_each:key_erase(t_l)
^ Unify: (92) [locally_each] locally_each:key_erase(t_l)
Call: (93) [system] nb_current('$w_tl_e', [_33152|_33154])
Exit: (93) [system] nb_current('$w_tl_e', [<clause>(0x55cff648d2d0), <clause>(0x55cffa96c2a0)])
Call: (93) [system] nb_linkval('$w_tl_e', [<clause>(0x55cffa96c2a0)])
Exit: (93) [system] nb_linkval('$w_tl_e', [<clause>(0x55cffa96c2a0)])
Call: (94) [system] erase(<clause>(0x55cff648d2d0))
Exit: (94) [system] erase(<clause>(0x55cff648d2d0))
Call: (93) [system] true
Exit: (93) [system] true
Call: (93) [system] true
Exit: (93) [system] true
^ Exit: (92) [locally_each] locally_each:key_erase(t_l)
^ Fail: (82) [locally_each] locally(t_l:infAssertedOnly(bc_q(b)), baseKB:call_u(bc_q(b)))
^ Fail: (81) [baseKB] locally_each:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b)))
^ Fail: (80) [loop_check] loop_check:loop_check_term_frame(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), info(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))), 'mpred_core.pl':273), 1, 1227, pfc_lib:trace_or_throw(looped(baseKB:locally_tl(infAssertedOnly(bc_q(b)), call_u(bc_q(b))))))
^ Fail: (73) [baseKB] mpred_call_only_facts(bc_q(b))
^ Fail: (72) [loop_check] loop_check:loop_check_term_frame(baseKB:mpred_call_only_facts(bc_q(b)), info(mpred_call_only_facts(bc_q(b)), 'mpred_database.pl':1356), 1, 1069, baseKB:fail)
^ Call: (79) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1218, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Unify: (79) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1218, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
Call: (80) [system] set_prolog_flag(last_call_optimisation, false)
Exit: (80) [system] set_prolog_flag(last_call_optimisation, false)
^ Call: (80) [loop_check] prolog_frame_attribute(1218, parent_goal, loop_check_term_frame(_45542, pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, _45548, _45550))
^ Fail: (80) [loop_check] prolog_frame_attribute(1218, parent_goal, loop_check_term_frame(_45542, pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, _45548, _45550))
^ Redo: (79) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1218, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Call: (80) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Unify: (80) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Unify: (80) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
Call: (92) [$autoload] leave_sandbox(_49418)
Unify: (92) [$autoload] leave_sandbox(_49418)
Exit: (92) [$autoload] leave_sandbox(false)
Call: (91) [$autoload] restore_sandbox(false)
Unify: (91) [$autoload] restore_sandbox(false)
Exit: (91) [$autoload] restore_sandbox(false)
^ Unify: (80) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Call: (81) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Unify: (81) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Redo: (81) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Exit: (81) [pfc_lib] loop_check:cyclic_break(bc_q(b))
^ Call: (87) [pfc_lib] lookup_spft('$bt'(bc_q(b), _57336), _57338, _57340)
^ Unify: (87) [pfc_lib] lookup_spft('$bt'(bc_q(b), _57336), _57338, _57340)
^ Fail: (87) [pfc_lib] lookup_spft('$bt'(bc_q(b), _57336), _57338, _57340)
^ Call: (87) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Unify: (87) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Exit: (87) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_rtrace)
^ Exit: (80) [pfc_lib] mpred_BC_CACHE0(bc_q(b), bc_q(b))
^ Exit: (79) [loop_check] loop_check:loop_check_term_frame(pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), pfc_lib:mpred_BC_CACHE0(bc_q(b), bc_q(b)), 1, 1218, pfc_lib:trace_or_throw(looped(mpred_BC_CACHE(bc_q(b), bc_q(b)))))
^ Call: (76) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _65184)
^ Unify: (76) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _65184)
^ Call: (77) [system] clause(pfc_lib:bc_q(b), true, _65184)
^ Fail: (77) [system] clause(pfc_lib:bc_q(b), true, _65184)
^ Fail: (76) [pfc_lib] hook_database:clause_i(pfc_lib:bc_q(b), true, _65184)
^ Call: (71) [baseKB] inherit_above(baseKB, bc_q(b))
^ Unify: (71) [baseKB] inherit_above(baseKB, bc_q(b))
Call: (72) [system] baseKB\=baseKB
Fail: (72) [system] baseKB\=baseKB
^ Fail: (71) [baseKB] inherit_above(baseKB, bc_q(b))
^ Fail: (69) [baseKB] mpred_bc_and_with_pfc(bc_q(b))
^ Fail: (68) [baseKB] baseKB:bc_q(b)
^ Call: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
failure=info((why_was_true(baseKB:(\+bc_q(b))),rtrace(baseKB:bc_q(b))))
no_proof_for(\+bc_q(b)).
no_proof_for(\+bc_q(b)).
no_proof_for(\+bc_q(b)).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0003_Line_0000__B'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0003_Line_0000__B-junit.xml
~*/
%= something cached
:- mpred_test( clause(bc_q(_),true)).
% Are we cleaning up backchains?
%~ mpred_test("Test_0004_Line_0000__bc_q_1",baseKB:clause(bc_q(_31714),true))
/*~
%~ mpred_test("Test_0004_Line_0000__bc_q_1",baseKB:clause(bc_q(_31714),true))
^ Call: (68) [baseKB] clause(bc_q(_31714), true)
^ Fail: (68) [baseKB] clause(bc_q(_31714), true)
^ Call: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
^ Unify: (68) [must_sanity] must_sanity:mquietly_if(true, rtrace:tAt_normal)
failure=info((why_was_true(baseKB:(\+clause(bc_q(_31714),true))),rtrace(baseKB:clause(bc_q(_31714),true))))
no_proof_for(\+clause(bc_q(Q),true)).
no_proof_for(\+clause(bc_q(Q),true)).
no_proof_for(\+clause(bc_q(Q),true)).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0004_Line_0000__bc_q_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0004_Line_0000__bc_q_1-junit.xml
~*/
% Are we cleaning up backchains?
:- ignore((mpred_info((((bc_q(N) <- bc_p(N))))))).
%~ =======================================================================
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/bc_01c.pfc#L40
%~ =======================================================================
%~ =======================================================================
/*~
Justifications for bc_q(N)<-bc_p(N):
[36m 1.1 mfl4(['N'=_],baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/bc_01c.pfc#L19 ',19) [0m
==================
mpred_db_type = rule(bwc).
==================
fail = mpred_child(s,v).
%~ -mpred_axiom.
%~ -well_founded.
%~ -mpred_assumption.
==================
:- dynamic (<-)/2.
:- multifile (<-)/2.
:- public (<-)/2.
:- module_transparent (<-)/2.
bc_q(A)<-bc_p(A).
%~ -get_mpred_is_tracing.
==================
get_tms_mode=full.
%~ +( mpred_supported(local,(bc_q(P_Q)<-bc_p(P_Q)))).
%~ +( mpred_supported(full,(bc_q(P_Q)<-bc_p(P_Q)))).
%~ -( mpred_supported(none,(bc_q(P_Q)<-bc_p(P_Q)))).
~*/
:- mpred_info(((bct(bc_q(_6600462), pt(bc_p(_6600462), rhs([bc_q(_6600462)])))))).
%~ =======================================================================
%~ make_dynamic_here(baseKB,bct(bc_q(_431654),pt(bc_p(_431654),rhs([bc_q(_431654)]))))
%~ =======================================================================
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/bc_01c.pfc:43
%~ =======================================================================
/*~
no_proof_for(bct(bc_q(_6600462),pt(bc_p(_6600462),rhs([ bc_q(_6600462)])))).
no_proof_for(bct(bc_q(_6600462),pt(bc_p(_6600462),rhs([ bc_q(_6600462)])))).
==================
mpred_db_type = fact(Fact).
==================
fail = mpred_child(s,v).
%~ -mpred_axiom.
%~ -well_founded.
%~ -mpred_assumption.
==================
:- dynamic bct/2.
%~ -get_mpred_is_tracing.
==================
get_tms_mode=full.
%~ +( mpred_supported(local,bct(bc_q(P_Q),pt(bc_p(P_Q),rhs([bc_q(P_Q)]))))).
%~ +( mpred_supported(full,bct(bc_q(P_Q),pt(bc_p(P_Q),rhs([bc_q(P_Q)]))))).
%~ -( mpred_supported(none,bct(bc_q(P_Q),pt(bc_p(P_Q),rhs([bc_q(P_Q)]))))).
~*/
:- mpred_test(((mpred_withdraw(((bc_q(N) <- bc_p(N))))),\+ clause(bc_q(_),true))).
%:- mpred_test(((mpred_undo1(((bc_q(N) <- bc_p(N))))),\+ clause(bc_q(_),true))).
%:- mpred_test(((mpred_retract(((bc_q(N) <- bc_p(N))))),\+ clause(bc_q(_),true))).
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/bc_01c.pfc:45
%~ mpred_test( "Test_0005_Line_0000__bc_q_1",
%~ baseKB : mpred_withdraw((bc_q(N)<-bc_p(N))),\+clause(bc_q(Q),true))
/*~
%~ mpred_test("Test_0005_Line_0000__bc_q_1",baseKB:(mpred_withdraw((bc_q(_42648)<-bc_p(_42648))),\+clause(bc_q(_42670),true)))
passed=info(why_was_true(baseKB:(mpred_withdraw((bc_q(_42648)<-bc_p(_42648))),\+clause(bc_q(_42670),true))))
no_proof_for((mpred_withdraw((bc_q(N)<-bc_p(N))),\+clause(bc_q(Q),true))).
no_proof_for((mpred_withdraw((bc_q(N)<-bc_p(N))),\+clause(bc_q(Q),true))).
no_proof_for((mpred_withdraw((bc_q(N)<-bc_p(N))),\+clause(bc_q(Q),true))).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0005_Line_0000__bc_q_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0005_Line_0000__bc_q_1-junit.xml
~*/
%:- mpred_test(((mpred_undo1(((bc_q(N) <- bc_p(N))))),\+ clause(bc_q(_),true))).
%:- mpred_test(((mpred_retract(((bc_q(N) <- bc_p(N))))),\+ clause(bc_q(_),true))).
:- get_bc_clause(bc_q(_A),BackChainClause),
mpred_why(( BackChainClause)).
/*~
no_proof_for((bc_q(_A):-awc,!,mpred_bc_and_with_pfc(bc_q(_A)))).
no_proof_for((bc_q(_A):-awc,!,mpred_bc_and_with_pfc(bc_q(_A)))).
~*/
:- get_bc_clause(bc_q(_A),H,B),
mpred_test(\+ clause(H,B)).
%~ mpred_test( "Test_0006_Line_0000__naf_bc_q_1",
%~ baseKB : \+( clause(bc_q(_A),(awc,!,mpred_bc_and_with_pfc(bc_q(_A))))))
/*~
%~ mpred_test("Test_0006_Line_0000__naf_bc_q_1",baseKB:(\+clause(bc_q(_452456),(awc,!,mpred_bc_and_with_pfc(bc_q(_452456))))))
passed=info(why_was_true(baseKB:(\+clause(bc_q(_452456),(awc,!,mpred_bc_and_with_pfc(bc_q(_452456)))))))
no_proof_for(\+clause(bc_q(_A),(awc,!,mpred_bc_and_with_pfc(bc_q(_A))))).
no_proof_for(\+clause(bc_q(_A),(awc,!,mpred_bc_and_with_pfc(bc_q(_A))))).
no_proof_for(\+clause(bc_q(_A),(awc,!,mpred_bc_and_with_pfc(bc_q(_A))))).
name ='logicmoo.pfc.test.sanity_base.BC_01C-Test_0006_Line_0000__naf_bc_q_1'.
JUNIT_CLASSNAME ='logicmoo.pfc.test.sanity_base.BC_01C'.
JUNIT_CMD ='timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif bc_01c.pfc'.
% saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-junit-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.BC_01C-Test_0006_Line_0000__naf_bc_q_1-junit.xml
~*/
%~ unused(save_junit_results)
%~ test_completed_exit(8)
:- dynamic junit_prop/3.
:- dynamic junit_prop/3.
:- dynamic junit_prop/3.
```
totalTime=1
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ABC_01C
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/bc_01c.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/BC_01C/logicmoo_pfc_test_sanity_base_BC_01C_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/68/testReport/logicmoo.pfc.test.sanity_base/BC_01C/logicmoo_pfc_test_sanity_base_BC_01C_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://github.com/logicmoo/logicmoo_workspace/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/bc_01c.pfc
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k bc_01c.pfc (returned 8)
| non_process | logicmoo pfc test sanity base bc junit cd var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base timeout foreground preserve status s sigkill k lmoo clif bc pfc gh master issue finfo issue search gitlab latest this build github running var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base bc pfc this test might need use module library logicmoo plarkc dynamic bc q dynamic bc p ain bc q n bc p n bc p a bc p b mpred test call u bc p b nothing cached mpred test test line b basekb call u bc p b mpred test test line b basekb call u bc p b passed info why was true basekb call u bc p b no proof for call u bc p b no proof for call u bc p b no proof for call u bc p b name logicmoo pfc test sanity base bc test line b junit classname logicmoo pfc test sanity base bc junit cmd timeout foreground preserve status s sigkill k lmoo clif bc pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo junit test sanity base units logicmoo pfc test sanity base bc test line b junit xml nothing cached mpred test clause bc q true mpred test test line naf bc q basekb clause bc q true mpred test test line naf bc q basekb clause bc q true passed info why was true basekb clause bc q true no proof for clause bc q q true no proof for clause bc q q true no proof for clause bc q q true name logicmoo pfc test sanity base bc test line naf bc q junit classname logicmoo pfc test sanity base bc junit cmd timeout foreground preserve status s sigkill k lmoo clif bc pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo junit test sanity base units logicmoo pfc test sanity base bc test line naf bc q junit xml mpred test bc q b something cached mpred test test line b basekb bc q b mpred test test line b basekb bc q b call lookup spft bt bc q b unify lookup spft bt bc q b fail lookup spft bt bc q b call must sanity mquietly if true rtrace tat normal unify must sanity mquietly if true rtrace tat normal call lookup spft bt bc q b unify lookup spft bt bc q b fail lookup spft bt bc q b call must sanity mquietly if true rtrace tat normal unify must sanity mquietly if true rtrace tat normal call lookup spft bt bc q b unify lookup spft bt bc q b fail lookup spft bt bc q b call must sanity mquietly if true rtrace tat normal unify must sanity mquietly if true rtrace tat normal call lookup spft bt bc q b unify lookup spft bt bc q b fail lookup spft bt bc q b call must sanity mquietly if true rtrace tat normal unify must sanity mquietly if true rtrace tat normal call basekb bc q b unify basekb bc q b call awc unify awc exit awc call mpred bc and with pfc bc q b unify mpred bc and with pfc bc q b call loop check loop check term frame basekb mpred call only facts bc q b info mpred call only facts bc q b mpred database pl basekb fail unify loop check loop check term frame basekb mpred call only facts bc q b info mpred call only facts bc q b mpred database pl basekb fail call set prolog flag last call optimisation false exit set prolog flag last call optimisation false call prolog frame attribute parent goal loop check term frame info mpred call only facts bc q b mpred database pl fail prolog frame attribute parent goal loop check term frame info mpred call only facts bc q b mpred database pl redo loop check loop check term frame basekb mpred call only facts bc q b info mpred call only facts bc q b mpred database pl basekb fail call mpred call only facts bc q b unify mpred call only facts bc q b call loop check loop check term frame basekb locally tl infassertedonly bc q b call u bc q b info basekb locally tl infassertedonly bc q b call u bc q b mpred core pl pfc lib trace or throw looped basekb locally tl infassertedonly bc q b call u bc q b unify loop check loop check term frame basekb locally tl infassertedonly bc q b call u bc q b info basekb locally tl infassertedonly bc q b call u bc q b mpred core pl pfc lib trace or throw looped basekb locally tl infassertedonly bc q b call u bc q b call set prolog flag last call optimisation false exit set prolog flag last call optimisation false call prolog frame attribute parent goal loop check term frame info basekb locally tl infassertedonly bc q b call u bc q b mpred core pl fail prolog frame attribute parent goal loop check term frame info basekb locally tl infassertedonly bc q b call u bc q b mpred core pl redo loop check loop check term frame basekb locally tl infassertedonly bc q b call u bc q b info basekb locally tl infassertedonly bc q b call u bc q b mpred core pl pfc lib trace or throw looped basekb locally tl infassertedonly bc q b call u bc q b call locally each locally tl infassertedonly bc q b call u bc q b unify locally each locally tl infassertedonly bc q b call u bc q b call locally t l infassertedonly bc q b call u bc q b unify locally t l infassertedonly bc q b basekb call u bc q b call must sanity mquietly if false rtrace tat rtrace unify must sanity mquietly if false rtrace tat rtrace exit must sanity mquietly if false rtrace tat rtrace call must sanity mquietly if false rtrace tat rtrace unify must sanity mquietly if false rtrace tat rtrace exit must sanity mquietly if false rtrace tat rtrace call must sanity mquietly if false rtrace tat rtrace unify must sanity mquietly if false rtrace tat rtrace exit must sanity mquietly if false rtrace tat rtrace call locally each clause true t l t l infassertedonly bc q b unify locally each clause true t l t l infassertedonly bc q b call copy term t l infassertedonly bc q b exit copy term t l infassertedonly bc q b t l infassertedonly bc q b call clause t l infassertedonly bc q b true fail clause infassertedonly bc q b true fail locally each clause true t l t l infassertedonly bc q b call locally each key asserta t l t l infassertedonly bc q b unify locally each key asserta t l t l infassertedonly bc q b call asserta t l infassertedonly bc q b exit asserta t l infassertedonly bc q b call nb current w tl e exit nb current w tl e call nb linkval w tl e exit nb linkval w tl e exit locally each key asserta t l t l infassertedonly bc q b call call u bc q b unify call u basekb bc q b call basekb bc q b unify basekb bc q b call awc unify awc exit awc call mpred bc and with pfc bc q b unify mpred bc and with pfc bc q b call loop check loop check term frame basekb mpred call only facts bc q b info mpred call only facts bc q b mpred database pl basekb fail unify loop check loop check term frame basekb mpred call only facts bc q b info mpred call only facts bc q b mpred database pl basekb fail call set prolog flag last call optimisation false exit set prolog flag last call optimisation false call prolog frame attribute parent goal loop check term frame info mpred call only facts bc q b mpred database pl exit prolog frame attribute parent goal loop check term frame basekb mpred call only facts bc q b info mpred call only facts bc q b mpred database pl basekb fail call fail fail fail fail loop check loop check term frame basekb mpred call only facts bc q b info mpred call only facts bc q b mpred database pl basekb fail call loop check loop check term frame pfc lib mpred bc bc q b bc q b pfc lib mpred bc bc q b bc q b pfc lib trace or throw looped mpred bc cache bc q b bc q b unify loop check loop check term frame pfc lib mpred bc bc q b bc q b pfc lib mpred bc bc q b bc q b pfc lib trace or throw looped mpred bc cache bc q b bc q b call set prolog flag last call optimisation false exit set prolog flag last call optimisation false call prolog frame attribute parent goal loop check term frame pfc lib mpred bc bc q b bc q b fail prolog frame attribute parent goal loop check term frame pfc lib mpred bc bc q b bc q b redo loop check loop check term frame pfc lib mpred bc bc q b bc q b pfc lib mpred bc bc q b bc q b pfc lib trace or throw looped mpred bc cache bc q b bc q b call mpred bc bc q b bc q b unify mpred bc bc q b bc q b unify mpred bc bc q b bc q b call leave sandbox unify leave sandbox exit leave sandbox false call restore sandbox false unify restore sandbox false exit restore sandbox false unify mpred bc bc q b bc q b call loop check cyclic break bc q b unify loop check cyclic break bc q b redo loop check cyclic break bc q b exit loop check cyclic break bc q b call lookup spft bt bc q b unify lookup spft bt bc q b fail lookup spft bt bc q b call must sanity mquietly if true rtrace tat rtrace unify must sanity mquietly if true rtrace tat rtrace exit must sanity mquietly if true rtrace tat rtrace exit mpred bc bc q b bc q b exit loop check loop check term frame pfc lib mpred bc bc q b bc q b pfc lib mpred bc bc q b bc q b pfc lib trace or throw looped mpred bc cache bc q b bc q b call hook database clause i pfc lib bc q b true unify hook database clause i pfc lib bc q b true call clause pfc lib bc q b true fail clause pfc lib bc q b true fail hook database clause i pfc lib bc q b true call inherit above basekb bc q b unify inherit above basekb bc q b call basekb basekb fail basekb basekb fail inherit above basekb bc q b fail mpred bc and with pfc bc q b fail basekb bc q b fail call u basekb bc q b call locally each key erase t l unify locally each key erase t l call nb current w tl e exit nb current w tl e call nb linkval w tl e exit nb linkval w tl e call erase exit erase call true exit true call true exit true exit locally each key erase t l fail locally t l infassertedonly bc q b basekb call u bc q b fail locally each locally tl infassertedonly bc q b call u bc q b fail loop check loop check term frame basekb locally tl infassertedonly bc q b call u bc q b info basekb locally tl infassertedonly bc q b call u bc q b mpred core pl pfc lib trace or throw looped basekb locally tl infassertedonly bc q b call u bc q b fail mpred call only facts bc q b fail loop check loop check term frame basekb mpred call only facts bc q b info mpred call only facts bc q b mpred database pl basekb fail call loop check loop check term frame pfc lib mpred bc bc q b bc q b pfc lib mpred bc bc q b bc q b pfc lib trace or throw looped mpred bc cache bc q b bc q b unify loop check loop check term frame pfc lib mpred bc bc q b bc q b pfc lib mpred bc bc q b bc q b pfc lib trace or throw looped mpred bc cache bc q b bc q b call set prolog flag last call optimisation false exit set prolog flag last call optimisation false call prolog frame attribute parent goal loop check term frame pfc lib mpred bc bc q b bc q b fail prolog frame attribute parent goal loop check term frame pfc lib mpred bc bc q b bc q b redo loop check loop check term frame pfc lib mpred bc bc q b bc q b pfc lib mpred bc bc q b bc q b pfc lib trace or throw looped mpred bc cache bc q b bc q b call mpred bc bc q b bc q b unify mpred bc bc q b bc q b unify mpred bc bc q b bc q b call leave sandbox unify leave sandbox exit leave sandbox false call restore sandbox false unify restore sandbox false exit restore sandbox false unify mpred bc bc q b bc q b call loop check cyclic break bc q b unify loop check cyclic break bc q b redo loop check cyclic break bc q b exit loop check cyclic break bc q b call lookup spft bt bc q b unify lookup spft bt bc q b fail lookup spft bt bc q b call must sanity mquietly if true rtrace tat rtrace unify must sanity mquietly if true rtrace tat rtrace exit must sanity mquietly if true rtrace tat rtrace exit mpred bc bc q b bc q b exit loop check loop check term frame pfc lib mpred bc bc q b bc q b pfc lib mpred bc bc q b bc q b pfc lib trace or throw looped mpred bc cache bc q b bc q b call hook database clause i pfc lib bc q b true unify hook database clause i pfc lib bc q b true call clause pfc lib bc q b true fail clause pfc lib bc q b true fail hook database clause i pfc lib bc q b true call inherit above basekb bc q b unify inherit above basekb bc q b call basekb basekb fail basekb basekb fail inherit above basekb bc q b fail mpred bc and with pfc bc q b fail basekb bc q b call must sanity mquietly if true rtrace tat normal unify must sanity mquietly if true rtrace tat normal failure info why was true basekb bc q b rtrace basekb bc q b no proof for bc q b no proof for bc q b no proof for bc q b name logicmoo pfc test sanity base bc test line b junit classname logicmoo pfc test sanity base bc junit cmd timeout foreground preserve status s sigkill k lmoo clif bc pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo junit test sanity base units logicmoo pfc test sanity base bc test line b junit xml something cached mpred test clause bc q true are we cleaning up backchains mpred test test line bc q basekb clause bc q true mpred test test line bc q basekb clause bc q true call clause bc q true fail clause bc q true call must sanity mquietly if true rtrace tat normal unify must sanity mquietly if true rtrace tat normal failure info why was true basekb clause bc q true rtrace basekb clause bc q true no proof for clause bc q q true no proof for clause bc q q true no proof for clause bc q q true name logicmoo pfc test sanity base bc test line bc q junit classname logicmoo pfc test sanity base bc junit cmd timeout foreground preserve status s sigkill k lmoo clif bc pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo junit test sanity base units logicmoo pfc test sanity base bc test line bc q junit xml are we cleaning up backchains ignore mpred info bc q n bc p n file justifications for bc q n bc p n basekb mpred db type rule bwc fail mpred child s v mpred axiom well founded mpred assumption dynamic multifile public module transparent bc q a bc p a get mpred is tracing get tms mode full mpred supported local bc q p q bc p p q mpred supported full bc q p q bc p p q mpred supported none bc q p q bc p p q mpred info bct bc q pt bc p rhs make dynamic here basekb bct bc q pt bc p rhs var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base bc pfc no proof for bct bc q pt bc p rhs no proof for bct bc q pt bc p rhs mpred db type fact fact fail mpred child s v mpred axiom well founded mpred assumption dynamic bct get mpred is tracing get tms mode full mpred supported local bct bc q p q pt bc p p q rhs mpred supported full bct bc q p q pt bc p p q rhs mpred supported none bct bc q p q pt bc p p q rhs mpred test mpred withdraw bc q n bc p n clause bc q true mpred test mpred bc q n bc p n clause bc q true mpred test mpred retract bc q n bc p n clause bc q true var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base bc pfc mpred test test line bc q basekb mpred withdraw bc q n bc p n clause bc q q true mpred test test line bc q basekb mpred withdraw bc q bc p clause bc q true passed info why was true basekb mpred withdraw bc q bc p clause bc q true no proof for mpred withdraw bc q n bc p n clause bc q q true no proof for mpred withdraw bc q n bc p n clause bc q q true no proof for mpred withdraw bc q n bc p n clause bc q q true name logicmoo pfc test sanity base bc test line bc q junit classname logicmoo pfc test sanity base bc junit cmd timeout foreground preserve status s sigkill k lmoo clif bc pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo junit test sanity base units logicmoo pfc test sanity base bc test line bc q junit xml mpred test mpred bc q n bc p n clause bc q true mpred test mpred retract bc q n bc p n clause bc q true get bc clause bc q a backchainclause mpred why backchainclause no proof for bc q a awc mpred bc and with pfc bc q a no proof for bc q a awc mpred bc and with pfc bc q a get bc clause bc q a h b mpred test clause h b mpred test test line naf bc q basekb clause bc q a awc mpred bc and with pfc bc q a mpred test test line naf bc q basekb clause bc q awc mpred bc and with pfc bc q passed info why was true basekb clause bc q awc mpred bc and with pfc bc q no proof for clause bc q a awc mpred bc and with pfc bc q a no proof for clause bc q a awc mpred bc and with pfc bc q a no proof for clause bc q a awc mpred bc and with pfc bc q a name logicmoo pfc test sanity base bc test line naf bc q junit classname logicmoo pfc test sanity base bc junit cmd timeout foreground preserve status s sigkill k lmoo clif bc pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo junit test sanity base units logicmoo pfc test sanity base bc test line naf bc q junit xml unused save junit results test completed exit dynamic junit prop dynamic junit prop dynamic junit prop totaltime issue search gitlab latest this build github failed var lib jenkins workspace logicmoo workspace bin lmoo junit minor k bc pfc returned | 0 |
67,552 | 9,064,826,435 | IssuesEvent | 2019-02-14 02:46:46 | USGS-Astrogeology/ISIS3 | https://api.github.com/repos/USGS-Astrogeology/ISIS3 | closed | Add Look Direction terms to the Glossary | documentation | ---
Author Name: **Tammy Becker** (Tammy Becker)
Original Assignee: Tammy Becker
---
Campt will now report the look direction unit vectors in Body Fixed, J2000, and Camera Coordinate Systems.
These new terms should be added to the Glossary (accessible from campt user documentation):
LookDirectionBodyFixed
LookDirectionJ2000
LookDirectionCamera
A brief description for a 'use-case' would be very helpful.
| 1.0 | Add Look Direction terms to the Glossary - ---
Author Name: **Tammy Becker** (Tammy Becker)
Original Assignee: Tammy Becker
---
Campt will now report the look direction unit vectors in Body Fixed, J2000, and Camera Coordinate Systems.
These new terms should be added to the Glossary (accessible from campt user documentation):
LookDirectionBodyFixed
LookDirectionJ2000
LookDirectionCamera
A brief description for a 'use-case' would be very helpful.
| non_process | add look direction terms to the glossary author name tammy becker tammy becker original assignee tammy becker campt will now report the look direction unit vectors in body fixed and camera coordinate systems these new terms should be added to the glossary accessible from campt user documentation lookdirectionbodyfixed lookdirectioncamera a brief description for a use case would be very helpful | 0 |
357,440 | 10,606,398,589 | IssuesEvent | 2019-10-10 23:13:31 | kubernetes/website | https://api.github.com/repos/kubernetes/website | closed | Issue with k8s.io/docs/concepts/ | kind/feature lifecycle/rotten priority/backlog | **This is a Bug Report**
<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. -->
<!--Required Information-->
**Problem:**
Navigation of docs is pretty difficult. Once I go through the entire doc, I have to scroll back again to the top and click on the next topic from the left pane. It's cumbersome and looses the flow
**Proposed Solution:**
Solution involves two parts,
- Have a pane at the bottom and top of the docs pane with "**Next**" and "**Previous**" buttons
- The list of contents pane on the left of the document pane should be visible at all times. It would be easy to jump in to the topic right from the point I'm reading rather than scrolling to the top and then clicking on the topic of interest
**Page to Update:**
https://kubernetes.io/docs/
<!--Optional Information (remove the comment tags around information you would like to include)-->
**Kubernetes Version:**
1.14
<!--Additional Information:-->
| 1.0 | Issue with k8s.io/docs/concepts/ - **This is a Bug Report**
<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. -->
<!--Required Information-->
**Problem:**
Navigation of docs is pretty difficult. Once I go through the entire doc, I have to scroll back again to the top and click on the next topic from the left pane. It's cumbersome and looses the flow
**Proposed Solution:**
Solution involves two parts,
- Have a pane at the bottom and top of the docs pane with "**Next**" and "**Previous**" buttons
- The list of contents pane on the left of the document pane should be visible at all times. It would be easy to jump in to the topic right from the point I'm reading rather than scrolling to the top and then clicking on the topic of interest
**Page to Update:**
https://kubernetes.io/docs/
<!--Optional Information (remove the comment tags around information you would like to include)-->
**Kubernetes Version:**
1.14
<!--Additional Information:-->
| non_process | issue with io docs concepts this is a bug report problem navigation of docs is pretty difficult once i go through the entire doc i have to scroll back again to the top and click on the next topic from the left pane it s cumbersome and looses the flow proposed solution solution involves two parts have a pane at the bottom and top of the docs pane with next and previous buttons the list of contents pane on the left of the document pane should be visible at all times it would be easy to jump in to the topic right from the point i m reading rather than scrolling to the top and then clicking on the topic of interest page to update kubernetes version | 0 |
594,956 | 18,058,006,497 | IssuesEvent | 2021-09-20 10:43:37 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | closed | [Paladin's] Improved Judgement decreased cd not working properly | Class-Paladin Priority-Medium Confirmed ChromieCraft Generic | ### What client do you play on?
enUS
### Faction
- [X] Alliance
- [X] Horde
### Content Phase:
- [x] Generic
- [ ] 1-19
- [ ] 20-29
- [ ] 30-39
- [ ] 40-49
- [ ] 50-59
### Current Behaviour
Original report: https://github.com/chromiecraft/chromiecraft/issues/1608
If I open with JoW, my CD finishes on JoL but it says spell not ready, taking an extra 2 seconds until I can use it. Same will happen if i open with JoL and want to use JoW after.
It seems that in the above scenario, Improved Judgements and the minus 2 seconds cast isn't taking effect.
### Expected Blizzlike Behaviour
If i open with JoW, my CD finishes on JoL it should be useable imadiatly when CD ends without any delay
### Source
_No response_
### Steps to reproduce the problem
1. I have improved judgement talent 2/2
2. Useing Seal of Rigtheousness and Devotion Aura
3. I useing Judgement of Light
4. CD goes off
5. I useing Judgement of Wisdom
6. Same happen if i useing Judgement of Wisdom first then Judgement of Light
7. Bug doesnt happen if I useing Judgement of Light only, or i useing Judgement of Wisdom only without mixing Judgements in my rotation
### Extra Notes
_No response_
### AC rev. hash/commit
https://github.com/chromiecraft/azerothcore-wotlk/commit/595bb6adccbabc714469f3935541978283b8bdfb
### Operating system
Ubuntu 20.04
### Modules
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna-lua-engine](https://github.com/azerothcore/mod-eluna-lua-engine)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker)
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-server-auto-shutdown](https://github.com/azerothcore/mod-server-auto-shutdown)
- [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-custom-corldboss](https://github.com/55Honey/Acore_CustomWorldboss)
- [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements)
- [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck)
### Customizations
None
### Server
ChromieCraft
Confirmed on local AC (edb5f78).
When using JoW, JoJ or JoL and afterwards another judgement with the talent "Improved Judgement", the decreased cd is not applied correctly (even though the cd timer on the spells is reduced).
In this video I opened with JoW and wanted to use JoL afterwards, but had to wait 2 seconds longer after the spell was already available again:
https://user-images.githubusercontent.com/68868567/131999384-9cbdda1e-f547-46a9-878d-5c451a2d4b46.mp4
However, spamming the same Judgement in a row works as expected.
| 1.0 | [Paladin's] Improved Judgement decreased cd not working properly - ### What client do you play on?
enUS
### Faction
- [X] Alliance
- [X] Horde
### Content Phase:
- [x] Generic
- [ ] 1-19
- [ ] 20-29
- [ ] 30-39
- [ ] 40-49
- [ ] 50-59
### Current Behaviour
Original report: https://github.com/chromiecraft/chromiecraft/issues/1608
If I open with JoW, my CD finishes on JoL but it says spell not ready, taking an extra 2 seconds until I can use it. Same will happen if i open with JoL and want to use JoW after.
It seems that in the above scenario, Improved Judgements and the minus 2 seconds cast isn't taking effect.
### Expected Blizzlike Behaviour
If i open with JoW, my CD finishes on JoL it should be useable imadiatly when CD ends without any delay
### Source
_No response_
### Steps to reproduce the problem
1. I have improved judgement talent 2/2
2. Useing Seal of Rigtheousness and Devotion Aura
3. I useing Judgement of Light
4. CD goes off
5. I useing Judgement of Wisdom
6. Same happen if i useing Judgement of Wisdom first then Judgement of Light
7. Bug doesnt happen if I useing Judgement of Light only, or i useing Judgement of Wisdom only without mixing Judgements in my rotation
### Extra Notes
_No response_
### AC rev. hash/commit
https://github.com/chromiecraft/azerothcore-wotlk/commit/595bb6adccbabc714469f3935541978283b8bdfb
### Operating system
Ubuntu 20.04
### Modules
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna-lua-engine](https://github.com/azerothcore/mod-eluna-lua-engine)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker)
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-server-auto-shutdown](https://github.com/azerothcore/mod-server-auto-shutdown)
- [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-custom-corldboss](https://github.com/55Honey/Acore_CustomWorldboss)
- [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements)
- [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck)
### Customizations
None
### Server
ChromieCraft
Confirmed on local AC (edb5f78).
When using JoW, JoJ or JoL and afterwards another judgement with the talent "Improved Judgement", the decreased cd is not applied correctly (even though the cd timer on the spells is reduced).
In this video I opened with JoW and wanted to use JoL afterwards, but had to wait 2 seconds longer after the spell was already available again:
https://user-images.githubusercontent.com/68868567/131999384-9cbdda1e-f547-46a9-878d-5c451a2d4b46.mp4
However, spamming the same Judgement in a row works as expected.
| non_process | improved judgement decreased cd not working properly what client do you play on enus faction alliance horde content phase generic current behaviour original report if i open with jow my cd finishes on jol but it says spell not ready taking an extra seconds until i can use it same will happen if i open with jol and want to use jow after it seems that in the above scenario improved judgements and the minus seconds cast isn t taking effect expected blizzlike behaviour if i open with jow my cd finishes on jol it should be useable imadiatly when cd ends without any delay source no response steps to reproduce the problem i have improved judgement talent useing seal of rigtheousness and devotion aura i useing judgement of light cd goes off i useing judgement of wisdom same happen if i useing judgement of wisdom first then judgement of light bug doesnt happen if i useing judgement of light only or i useing judgement of wisdom only without mixing judgements in my rotation extra notes no response ac rev hash commit operating system ubuntu modules customizations none server chromiecraft confirmed on local ac when using jow joj or jol and afterwards another judgement with the talent improved judgement the decreased cd is not applied correctly even though the cd timer on the spells is reduced in this video i opened with jow and wanted to use jol afterwards but had to wait seconds longer after the spell was already available again however spamming the same judgement in a row works as expected | 0 |
230,973 | 18,727,411,419 | IssuesEvent | 2021-11-03 17:40:28 | mennaelkashef/eShop | https://api.github.com/repos/mennaelkashef/eShop | opened | Ahmedhatemsayed@gmail.com | Hello! RULE-GOT-APPLIED DOES-NOT-CONTAIN-STRING Rule-works-on-convert-to-bug test instabug | # :clipboard: Bug Details
>Ahmedhatemsayed@gmail.com
key | value
--|--
Reported At | 2021-11-03 17:39:43 UTC
Email | Ahmedhatemsayed@gmail.com
Categories | Suggest an improvement, test71
Tags | test, Hello!, RULE-GOT-APPLIED, DOES-NOT-CONTAIN-STRING, Rule-works-on-convert-to-bug, instabug
App Version | 1.1 (1)
Session Duration | 219
Device | Google sdk_gphone_x86, OS Level 30
Display | 1080x2280 (xhdpi)
Location | Cairo, Egypt (en)
## :point_right: [View Full Bug Report on Instabug](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8134?utm_source=github&utm_medium=integrations) :point_left:
___
# :iphone: View Hierarchy
This bug was reported from **com.instabug.sdkdoctor.presentation.networkinterception.NetworkRequestsFragment**
Find its interactive view hierarchy with all its subviews here: :point_right: **[Check View Hierarchy](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8134?show-hierarchy-view=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :chart_with_downwards_trend: Session Profiler
Here is what the app was doing right before the bug was reported:
Key | Value
--|--
Used Memory | 49.6% - 0.96/1.93 GB
Used Storage | 15.1% - 1.46/9.66 GB
Connectivity | WiFi
Battery | 100% - unplugged
Orientation | portrait
Find all the changes that happened in the parameters mentioned above during the last 60 seconds before the bug was reported here: :point_right: **[View Full Session Profiler](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8134?show-session-profiler=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :bust_in_silhouette: User Info
### User Attributes
```
key_name -2021528916: key value bla bla bla la
key_name -692250033: key value bla bla bla la
key_name -1788191881: key value bla bla bla la
key_name -520803198: key value bla bla bla la
key_name 952218427: key value bla bla bla la
key_name 952541781: key value bla bla bla la
key_name 1983723549: key value bla bla bla la
key_name -1482070590: key value bla bla bla la
key_name 875689107: key value bla bla bla la
key_name -1229869054: key value bla bla bla la
key_name 728612594: key value bla bla bla la
key_name 1657580172: key value bla bla bla la
key_name -83893761: key value bla bla bla la
key_name 2124213658: key value bla bla bla la
```
___
# :mag_right: Logs
### User Steps
Here are the last 10 steps done by the user right before the bug was reported:
```
17:39:28 Scroll in "networkRequestsList" of type "androidx.recyclerview.widget.RecyclerView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:28 Scroll in "networkRequestsList" of type "androidx.recyclerview.widget.RecyclerView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:29 Scroll in "networkRequestsList" of type "androidx.recyclerview.widget.RecyclerView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:30 Tap in "flow" of type "androidx.constraintlayout.helper.widget.Flow" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:37 Scroll in "networkRequestsList" of type "androidx.recyclerview.widget.RecyclerView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:39 Tap in "arrowIcon" of type "androidx.appcompat.widget.AppCompatImageView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:39 com.instabug.sdkdoctor.presentation.DoctorActivity was paused.
17:39:39 In activity com.instabug.sdkdoctor.presentation.DoctorActivity: fragment com.instabug.sdkdoctor.presentation.networkinterception.NetworkRequestsFragment was paused.
17:39:41 Tap in "11/3/21 7:36 PM" of type "com.google.android.material.textview.MaterialTextView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:43 Tap in "11/3/21 7:37 PM" of type "com.google.android.material.textview.MaterialTextView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
```
Find all the user steps done by the user throughout the session here: :point_right: **[View All User Steps](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8134?show-logs=user_steps&utm_source=github&utm_medium=integrations)** :point_left:
### Console Log
Here are the last 10 console logs logged right before the bug was reported:
```
17:39:46 D/LeakCanary(10726): Rescheduling check for retained objects in 2000ms because found only 1 retained objects (< 5 while app visible)
17:39:46 D/LeakCanary(10726): Setting up flushing for Thread[InsetsAnimations,5,main]
17:39:48 I/com.example.ap(10726): Explicit concurrent copying GC freed 44142(3375KB) AllocSpace objects, 17(1280KB) LOS objects, 49% free, 8866KB/17MB, paused 43us total 30.843ms
17:39:48 D/LeakCanary(10726): Rescheduling check for retained objects in 2000ms because found only 1 retained objects (< 5 while app visible)
17:39:48 V/FA (10726): Inactivity, disconnecting from the service
17:39:50 I/com.example.ap(10726): Explicit concurrent copying GC freed 25176(2250KB) AllocSpace objects, 12(920KB) LOS objects, 49% free, 8792KB/17MB, paused 34us total 42.767ms
17:39:50 D/LeakCanary(10726): Rescheduling check for retained objects in 2000ms because found only 1 retained objects (< 5 while app visible)
17:39:50 D/IB-BaseReportingPresenter(10726): checkUserEmailValid :Ahmedhatemsayed@gmail.com
17:39:50 D/IB-ActionsOrchestrator(10726): runAction
17:39:50 D/IB-AttachmentsUtility(10726): encryptAttachments
```
Find all the logged console logs throughout the session here: :point_right: **[View All Console Log](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8134?show-logs=console_log&utm_source=github&utm_medium=integrations)** :point_left:
___
# :warning: Looking for More Details?
1. **Network Log**: we are unable to capture your network requests automatically. If you are using HttpUrlConnection or Okhttp requests, [**check the details mentioned here**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations#section-network-logs).
2. **User Events**: start capturing custom User Events to send them along with each report. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations).
3. **Instabug Log**: start adding Instabug logs to see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations). | 1.0 | Ahmedhatemsayed@gmail.com - # :clipboard: Bug Details
>Ahmedhatemsayed@gmail.com
key | value
--|--
Reported At | 2021-11-03 17:39:43 UTC
Email | Ahmedhatemsayed@gmail.com
Categories | Suggest an improvement, test71
Tags | test, Hello!, RULE-GOT-APPLIED, DOES-NOT-CONTAIN-STRING, Rule-works-on-convert-to-bug, instabug
App Version | 1.1 (1)
Session Duration | 219
Device | Google sdk_gphone_x86, OS Level 30
Display | 1080x2280 (xhdpi)
Location | Cairo, Egypt (en)
## :point_right: [View Full Bug Report on Instabug](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8134?utm_source=github&utm_medium=integrations) :point_left:
___
# :iphone: View Hierarchy
This bug was reported from **com.instabug.sdkdoctor.presentation.networkinterception.NetworkRequestsFragment**
Find its interactive view hierarchy with all its subviews here: :point_right: **[Check View Hierarchy](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8134?show-hierarchy-view=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :chart_with_downwards_trend: Session Profiler
Here is what the app was doing right before the bug was reported:
Key | Value
--|--
Used Memory | 49.6% - 0.96/1.93 GB
Used Storage | 15.1% - 1.46/9.66 GB
Connectivity | WiFi
Battery | 100% - unplugged
Orientation | portrait
Find all the changes that happened in the parameters mentioned above during the last 60 seconds before the bug was reported here: :point_right: **[View Full Session Profiler](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8134?show-session-profiler=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :bust_in_silhouette: User Info
### User Attributes
```
key_name -2021528916: key value bla bla bla la
key_name -692250033: key value bla bla bla la
key_name -1788191881: key value bla bla bla la
key_name -520803198: key value bla bla bla la
key_name 952218427: key value bla bla bla la
key_name 952541781: key value bla bla bla la
key_name 1983723549: key value bla bla bla la
key_name -1482070590: key value bla bla bla la
key_name 875689107: key value bla bla bla la
key_name -1229869054: key value bla bla bla la
key_name 728612594: key value bla bla bla la
key_name 1657580172: key value bla bla bla la
key_name -83893761: key value bla bla bla la
key_name 2124213658: key value bla bla bla la
```
___
# :mag_right: Logs
### User Steps
Here are the last 10 steps done by the user right before the bug was reported:
```
17:39:28 Scroll in "networkRequestsList" of type "androidx.recyclerview.widget.RecyclerView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:28 Scroll in "networkRequestsList" of type "androidx.recyclerview.widget.RecyclerView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:29 Scroll in "networkRequestsList" of type "androidx.recyclerview.widget.RecyclerView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:30 Tap in "flow" of type "androidx.constraintlayout.helper.widget.Flow" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:37 Scroll in "networkRequestsList" of type "androidx.recyclerview.widget.RecyclerView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:39 Tap in "arrowIcon" of type "androidx.appcompat.widget.AppCompatImageView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:39 com.instabug.sdkdoctor.presentation.DoctorActivity was paused.
17:39:39 In activity com.instabug.sdkdoctor.presentation.DoctorActivity: fragment com.instabug.sdkdoctor.presentation.networkinterception.NetworkRequestsFragment was paused.
17:39:41 Tap in "11/3/21 7:36 PM" of type "com.google.android.material.textview.MaterialTextView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
17:39:43 Tap in "11/3/21 7:37 PM" of type "com.google.android.material.textview.MaterialTextView" in "com.instabug.sdkdoctor.presentation.DoctorActivity"
```
Find all the user steps done by the user throughout the session here: :point_right: **[View All User Steps](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8134?show-logs=user_steps&utm_source=github&utm_medium=integrations)** :point_left:
### Console Log
Here are the last 10 console logs logged right before the bug was reported:
```
17:39:46 D/LeakCanary(10726): Rescheduling check for retained objects in 2000ms because found only 1 retained objects (< 5 while app visible)
17:39:46 D/LeakCanary(10726): Setting up flushing for Thread[InsetsAnimations,5,main]
17:39:48 I/com.example.ap(10726): Explicit concurrent copying GC freed 44142(3375KB) AllocSpace objects, 17(1280KB) LOS objects, 49% free, 8866KB/17MB, paused 43us total 30.843ms
17:39:48 D/LeakCanary(10726): Rescheduling check for retained objects in 2000ms because found only 1 retained objects (< 5 while app visible)
17:39:48 V/FA (10726): Inactivity, disconnecting from the service
17:39:50 I/com.example.ap(10726): Explicit concurrent copying GC freed 25176(2250KB) AllocSpace objects, 12(920KB) LOS objects, 49% free, 8792KB/17MB, paused 34us total 42.767ms
17:39:50 D/LeakCanary(10726): Rescheduling check for retained objects in 2000ms because found only 1 retained objects (< 5 while app visible)
17:39:50 D/IB-BaseReportingPresenter(10726): checkUserEmailValid :Ahmedhatemsayed@gmail.com
17:39:50 D/IB-ActionsOrchestrator(10726): runAction
17:39:50 D/IB-AttachmentsUtility(10726): encryptAttachments
```
Find all the logged console logs throughout the session here: :point_right: **[View All Console Log](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8134?show-logs=console_log&utm_source=github&utm_medium=integrations)** :point_left:
___
# :warning: Looking for More Details?
1. **Network Log**: we are unable to capture your network requests automatically. If you are using HttpUrlConnection or Okhttp requests, [**check the details mentioned here**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations#section-network-logs).
2. **User Events**: start capturing custom User Events to send them along with each report. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations).
3. **Instabug Log**: start adding Instabug logs to see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations). | non_process | ahmedhatemsayed gmail com clipboard bug details ahmedhatemsayed gmail com key value reported at utc email ahmedhatemsayed gmail com categories suggest an improvement tags test hello rule got applied does not contain string rule works on convert to bug instabug app version session duration device google sdk gphone os level display xhdpi location cairo egypt en point right point left iphone view hierarchy this bug was reported from com instabug sdkdoctor presentation networkinterception networkrequestsfragment find its interactive view hierarchy with all its subviews here point right point left chart with downwards trend session profiler here is what the app was doing right before the bug was reported key value used memory gb used storage gb connectivity wifi battery unplugged orientation portrait find all the changes that happened in the parameters mentioned above during the last seconds before the bug was reported here point right point left bust in silhouette user info user attributes key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la mag right logs user steps here are the last steps done by the user right before the bug was reported scroll in networkrequestslist of type androidx recyclerview widget recyclerview in com instabug sdkdoctor presentation doctoractivity scroll in networkrequestslist of type androidx recyclerview widget recyclerview in com instabug sdkdoctor presentation doctoractivity scroll in networkrequestslist of type androidx recyclerview widget recyclerview in com instabug sdkdoctor presentation doctoractivity tap in flow of type androidx constraintlayout helper widget flow in com instabug sdkdoctor presentation doctoractivity scroll in networkrequestslist of type androidx recyclerview widget recyclerview in com instabug sdkdoctor presentation doctoractivity tap in arrowicon of type androidx appcompat widget appcompatimageview in com instabug sdkdoctor presentation doctoractivity com instabug sdkdoctor presentation doctoractivity was paused in activity com instabug sdkdoctor presentation doctoractivity fragment com instabug sdkdoctor presentation networkinterception networkrequestsfragment was paused tap in pm of type com google android material textview materialtextview in com instabug sdkdoctor presentation doctoractivity tap in pm of type com google android material textview materialtextview in com instabug sdkdoctor presentation doctoractivity find all the user steps done by the user throughout the session here point right point left console log here are the last console logs logged right before the bug was reported d leakcanary rescheduling check for retained objects in because found only retained objects while app visible d leakcanary setting up flushing for thread i com example ap explicit concurrent copying gc freed allocspace objects los objects free paused total d leakcanary rescheduling check for retained objects in because found only retained objects while app visible v fa inactivity disconnecting from the service i com example ap explicit concurrent copying gc freed allocspace objects los objects free paused total d leakcanary rescheduling check for retained objects in because found only retained objects while app visible d ib basereportingpresenter checkuseremailvalid ahmedhatemsayed gmail com d ib actionsorchestrator runaction d ib attachmentsutility encryptattachments find all the logged console logs throughout the session here point right point left warning looking for more details network log we are unable to capture your network requests automatically if you are using httpurlconnection or okhttp requests user events start capturing custom user events to send them along with each report instabug log start adding instabug logs to see them right inside each report you receive | 0 |
257,197 | 27,561,802,849 | IssuesEvent | 2023-03-07 22:47:20 | samqws-marketing/coursera_naptime | https://api.github.com/repos/samqws-marketing/coursera_naptime | closed | CVE-2019-14892 (High) detected in multiple libraries - autoclosed | Mend: dependency security vulnerability | ## CVE-2019-14892 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.3.3.jar</b>, <b>jackson-databind-2.9.0.jar</b>, <b>jackson-databind-2.8.11.4.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.3.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.3.3.jar</p>
<p>
Dependency Hierarchy:
- sbt-plugin-2.4.4.jar (Root Library)
- sbt-js-engine-1.1.3.jar
- npm_2.10-1.1.1.jar
- webjars-locator-0.26.jar
- :x: **jackson-databind-2.3.3.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.9.0.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- play-json_2.12-2.6.14.jar
- jackson-datatype-jdk8-2.8.11.jar
- :x: **jackson-databind-2.9.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.11.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.11.4.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- :x: **jackson-databind-2.8.11.4.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was discovered in jackson-databind in versions before 2.9.10, 2.8.11.5 and 2.6.7.3, where it would permit polymorphic deserialization of a malicious object using commons-configuration 1 and 2 JNDI classes. An attacker could use this flaw to execute arbitrary code.
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-14892>CVE-2019-14892</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-09-04</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.8.11.5</p>
<p>Direct dependency fix Resolution (com.typesafe.play:play-ehcache_2.12): 2.7.0</p>
</p>
</details>
<p></p>
| True | CVE-2019-14892 (High) detected in multiple libraries - autoclosed - ## CVE-2019-14892 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.3.3.jar</b>, <b>jackson-databind-2.9.0.jar</b>, <b>jackson-databind-2.8.11.4.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.3.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.3.3.jar</p>
<p>
Dependency Hierarchy:
- sbt-plugin-2.4.4.jar (Root Library)
- sbt-js-engine-1.1.3.jar
- npm_2.10-1.1.1.jar
- webjars-locator-0.26.jar
- :x: **jackson-databind-2.3.3.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.9.0.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- play-json_2.12-2.6.14.jar
- jackson-datatype-jdk8-2.8.11.jar
- :x: **jackson-databind-2.9.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.11.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.11.4.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- :x: **jackson-databind-2.8.11.4.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was discovered in jackson-databind in versions before 2.9.10, 2.8.11.5 and 2.6.7.3, where it would permit polymorphic deserialization of a malicious object using commons-configuration 1 and 2 JNDI classes. An attacker could use this flaw to execute arbitrary code.
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-14892>CVE-2019-14892</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-09-04</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.8.11.5</p>
<p>Direct dependency fix Resolution (com.typesafe.play:play-ehcache_2.12): 2.7.0</p>
</p>
</details>
<p></p>
| non_process | cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy sbt plugin jar root library sbt js engine jar npm jar webjars locator jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy play ehcache jar root library play jar play json jar jackson datatype jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy play ehcache jar root library play jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was discovered in jackson databind in versions before and where it would permit polymorphic deserialization of a malicious object using commons configuration and jndi classes an attacker could use this flaw to execute arbitrary code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind direct dependency fix resolution com typesafe play play ehcache | 0 |
14,797 | 3,896,831,440 | IssuesEvent | 2016-04-16 02:02:02 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | closed | Improve chart documentation | In Documentation | Not clear in current documentation when tu use lists, lists of lists, lists of lists of lists, and the outcome.
Add an independant chart example model, and add nice pictures... | 1.0 | Improve chart documentation - Not clear in current documentation when tu use lists, lists of lists, lists of lists of lists, and the outcome.
Add an independant chart example model, and add nice pictures... | non_process | improve chart documentation not clear in current documentation when tu use lists lists of lists lists of lists of lists and the outcome add an independant chart example model and add nice pictures | 0 |
392,501 | 11,592,268,567 | IssuesEvent | 2020-02-24 11:06:58 | Tascana/tascana-web | https://api.github.com/repos/Tascana/tascana-web | closed | On drag start apply scale (1.05), on drag apply "grabbing" cursor style | priority | *Требования:*
- В drag'n'drop режиме размер таск-блока должен увеличиться на 1.05
- В drag'n'drop режиме для таск-блока должно добавляться свойство `cursor: grab` | 1.0 | On drag start apply scale (1.05), on drag apply "grabbing" cursor style - *Требования:*
- В drag'n'drop режиме размер таск-блока должен увеличиться на 1.05
- В drag'n'drop режиме для таск-блока должно добавляться свойство `cursor: grab` | non_process | on drag start apply scale on drag apply grabbing cursor style требования в drag n drop режиме размер таск блока должен увеличиться на в drag n drop режиме для таск блока должно добавляться свойство cursor grab | 0 |
25,277 | 24,934,270,023 | IssuesEvent | 2022-10-31 14:03:38 | pulumi/pulumi-kubernetes-operator | https://api.github.com/repos/pulumi/pulumi-kubernetes-operator | opened | Deleting a Program before deleting a Stack blocks the deletion of the Stack | impact/usability kind/bug | ### What happened?
If deleting both a stack and a program at the same time (e.g. they are defined in the same file, or if done manually), the program will delete fine, but the stack will enter a stalled condition as it tries to resolve a reference to a program that does not exist. This hangs the deletion of the stack (and blocks your terminal, as kubectl waits).
If the program is re-created, the stack will catch up and delete, and the program can be deleted again. Doing this, as far as I can tell, appropriately deletes and cleans up resources created by the stack.
### Steps to reproduce
<details>
<summary>Define a stack and a program in the same YAML file</summary>
```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
name: deployment-example
spec:
stack: deployment-example
programRef:
name: deployment-example
destroyOnFinalize: true
envRefs:
PULUMI_ACCESS_TOKEN:
type: Secret
secret:
name: pulumi-api-secret
key: accessToken
---
apiVersion: pulumi.com/v1
kind: Program
metadata:
name: deployment-example
program:
resources:
random:
type: random:RandomInteger
properties:
min: 1
max: 10
nginx:
type: kubernetes:apps/v1:Deployment
properties:
metadata:
name: nginx2
spec:
selector:
matchLabels:
app: nginx
replicas: ${random.result}
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
```
</details>
Create the resources with `kubectl create -f <file>`
Delete the resources with `kubectl delete -f <file>`
The delete will hang - `kubectl describe`-ing the stack in another terminal window will show it has stalled as it tries to find the program.
If you once again create the resources with `kubectl create -f <file>`, the stack fails to create (as it already exists), the Program creates (as it has already been deleted), and the Stack finds the Program and deletes finally. You can then `kubectl delete -f <file>` again to clean up the Program.
### Expected Behavior
Stack should delete, even when it cannot find the program references.
### Actual Behavior
Delete hangs when the stack is stalled looking for the referenced program.
### Output of `pulumi about`
_No response_
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
| True | Deleting a Program before deleting a Stack blocks the deletion of the Stack - ### What happened?
If deleting both a stack and a program at the same time (e.g. they are defined in the same file, or if done manually), the program will delete fine, but the stack will enter a stalled condition as it tries to resolve a reference to a program that does not exist. This hangs the deletion of the stack (and blocks your terminal, as kubectl waits).
If the program is re-created, the stack will catch up and delete, and the program can be deleted again. Doing this, as far as I can tell, appropriately deletes and cleans up resources created by the stack.
### Steps to reproduce
<details>
<summary>Define a stack and a program in the same YAML file</summary>
```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
name: deployment-example
spec:
stack: deployment-example
programRef:
name: deployment-example
destroyOnFinalize: true
envRefs:
PULUMI_ACCESS_TOKEN:
type: Secret
secret:
name: pulumi-api-secret
key: accessToken
---
apiVersion: pulumi.com/v1
kind: Program
metadata:
name: deployment-example
program:
resources:
random:
type: random:RandomInteger
properties:
min: 1
max: 10
nginx:
type: kubernetes:apps/v1:Deployment
properties:
metadata:
name: nginx2
spec:
selector:
matchLabels:
app: nginx
replicas: ${random.result}
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
```
</details>
Create the resources with `kubectl create -f <file>`
Delete the resources with `kubectl delete -f <file>`
The delete will hang - `kubectl describe`-ing the stack in another terminal window will show it has stalled as it tries to find the program.
If you once again create the resources with `kubectl create -f <file>`, the stack fails to create (as it already exists), the Program creates (as it has already been deleted), and the Stack finds the Program and deletes finally. You can then `kubectl delete -f <file>` again to clean up the Program.
### Expected Behavior
Stack should delete, even when it cannot find the program references.
### Actual Behavior
Delete hangs when the stack is stalled looking for the referenced program.
### Output of `pulumi about`
_No response_
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
| non_process | deleting a program before deleting a stack blocks the deletion of the stack what happened if deleting both a stack and a program at the same time e g they are defined in the same file or if done manually the program will delete fine but the stack will enter a stalled condition as it tries to resolve a reference to a program that does not exist this hangs the deletion of the stack and blocks your terminal as kubectl waits if the program is re created the stack will catch up and delete and the program can be deleted again doing this as far as i can tell appropriately deletes and cleans up resources created by the stack steps to reproduce define a stack and a program in the same yaml file yaml apiversion pulumi com kind stack metadata name deployment example spec stack deployment example programref name deployment example destroyonfinalize true envrefs pulumi access token type secret secret name pulumi api secret key accesstoken apiversion pulumi com kind program metadata name deployment example program resources random type random randominteger properties min max nginx type kubernetes apps deployment properties metadata name spec selector matchlabels app nginx replicas random result template metadata labels app nginx spec containers name nginx image nginx create the resources with kubectl create f delete the resources with kubectl delete f the delete will hang kubectl describe ing the stack in another terminal window will show it has stalled as it tries to find the program if you once again create the resources with kubectl create f the stack fails to create as it already exists the program creates as it has already been deleted and the stack finds the program and deletes finally you can then kubectl delete f again to clean up the program expected behavior stack should delete even when it cannot find the program references actual behavior delete hangs when the stack is stalled looking for the referenced program output of pulumi about no response additional context no response contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already | 0 |
18,794 | 24,698,108,738 | IssuesEvent | 2022-10-19 13:33:13 | km4ack/pi-build | https://api.github.com/repos/km4ack/pi-build | closed | Old piardopc modem not deleted | in process Bug-Minor | Per Tim, KF7VUT, the old piardopc modem isn't deleted before the new one is installed. As a work around, the user can open the file browser and navigate to the ardop directory. Delete all instances of piardopc and then use the build a pi update tool to reinstall the ARDOP modem. | 1.0 | Old piardopc modem not deleted - Per Tim, KF7VUT, the old piardopc modem isn't deleted before the new one is installed. As a work around, the user can open the file browser and navigate to the ardop directory. Delete all instances of piardopc and then use the build a pi update tool to reinstall the ARDOP modem. | process | old piardopc modem not deleted per tim the old piardopc modem isn t deleted before the new one is installed as a work around the user can open the file browser and navigate to the ardop directory delete all instances of piardopc and then use the build a pi update tool to reinstall the ardop modem | 1 |
16,727 | 2,615,122,513 | IssuesEvent | 2015-03-01 05:49:47 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | d | auto-migrated Priority-Medium Type-Sample | ```
Which Google API and version (e.g. Google Calendar Data API version 2)?
What format (e.g. JSON, Atom)?
What Authentation (e.g. OAuth, OAuth 2, ClientLogin)?
Java environment (e.g. Java 6, Android 2.3, App Engine)?
External references, such as API reference guide?
Please provide any additional information below.
```
Original issue reported on code.google.com by `david.te...@gmail.com` on 21 Sep 2011 at 11:05 | 1.0 | d - ```
Which Google API and version (e.g. Google Calendar Data API version 2)?
What format (e.g. JSON, Atom)?
What Authentation (e.g. OAuth, OAuth 2, ClientLogin)?
Java environment (e.g. Java 6, Android 2.3, App Engine)?
External references, such as API reference guide?
Please provide any additional information below.
```
Original issue reported on code.google.com by `david.te...@gmail.com` on 21 Sep 2011 at 11:05 | non_process | d which google api and version e g google calendar data api version what format e g json atom what authentation e g oauth oauth clientlogin java environment e g java android app engine external references such as api reference guide please provide any additional information below original issue reported on code google com by david te gmail com on sep at | 0 |
3,663 | 6,694,651,357 | IssuesEvent | 2017-10-10 03:26:58 | york-region-tpss/stp | https://api.github.com/repos/york-region-tpss/stp | opened | Watering Assignment - Auto-correct Wrong User Input | enhancement process workflow | Automatically change the number to 0 when user inputs any number that's larger than the total number on the RIN or not a positive integer.
For wrongly entered on-hold numbers, flash red and don't allow assigning. | 1.0 | Watering Assignment - Auto-correct Wrong User Input - Automatically change the number to 0 when user inputs any number that's larger than the total number on the RIN or not a positive integer.
For wrongly entered on-hold numbers, flash red and don't allow assigning. | process | watering assignment auto correct wrong user input automatically change the number to when user inputs any number that s larger than the total number on the rin or not a positive integer for wrongly entered on hold numbers flash red and don t allow assigning | 1 |
230 | 2,653,386,736 | IssuesEvent | 2015-03-16 23:02:04 | camsci/meteor-pi | https://api.github.com/repos/camsci/meteor-pi | closed | WP1.1 - Camera image processing validation | domain:Hardware domain:Image Processing Top Level Functionality | Assemble enough of the camera hardware to validate that images can be captured from the camera module and processed by the main board, and that events can be detected.
Note - for this issue we don't need to detect meteors perfectly, or possibly at all, just that we're linking everything together, we don't have glitches or breaks in the data streams, no unexpected reboots etc. Basically attempting to establish a stable hardware platform and work through any glitches we find. | 1.0 | WP1.1 - Camera image processing validation - Assemble enough of the camera hardware to validate that images can be captured from the camera module and processed by the main board, and that events can be detected.
Note - for this issue we don't need to detect meteors perfectly, or possibly at all, just that we're linking everything together, we don't have glitches or breaks in the data streams, no unexpected reboots etc. Basically attempting to establish a stable hardware platform and work through any glitches we find. | process | camera image processing validation assemble enough of the camera hardware to validate that images can be captured from the camera module and processed by the main board and that events can be detected note for this issue we don t need to detect meteors perfectly or possibly at all just that we re linking everything together we don t have glitches or breaks in the data streams no unexpected reboots etc basically attempting to establish a stable hardware platform and work through any glitches we find | 1 |
78,116 | 3,509,470,376 | IssuesEvent | 2016-01-08 22:55:19 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | Visibility when logged or teleported to map (BB #886) | duplicate migrated Priority: Low Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** smoldar
**Original Date:** 19.05.2015 08:51:29 GMT+0000
**Original Priority:** minor
**Original Type:** bug
**Original State:** duplicate
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/886
<hr>
When player is logged ingame, still exist bug of loaded creatures. Only part of npcs is loaded, after some moves is loaded other near creatures, or after rotate on same position.
This problem is when player log into game or teleported to instance or another map. | 1.0 | Visibility when logged or teleported to map (BB #886) - This issue was migrated from bitbucket.
**Original Reporter:** smoldar
**Original Date:** 19.05.2015 08:51:29 GMT+0000
**Original Priority:** minor
**Original Type:** bug
**Original State:** duplicate
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/886
<hr>
When player is logged ingame, still exist bug of loaded creatures. Only part of npcs is loaded, after some moves is loaded other near creatures, or after rotate on same position.
This problem is when player log into game or teleported to instance or another map. | non_process | visibility when logged or teleported to map bb this issue was migrated from bitbucket original reporter smoldar original date gmt original priority minor original type bug original state duplicate direct link when player is logged ingame still exist bug of loaded creatures only part of npcs is loaded after some moves is loaded other near creatures or after rotate on same position this problem is when player log into game or teleported to instance or another map | 0 |
8,693 | 11,837,195,735 | IssuesEvent | 2020-03-23 13:49:34 | prisma/prisma2 | https://api.github.com/repos/prisma/prisma2 | opened | Remove enum usage from docs where sqlite is used | kind/docs process/candidate | Original Ref: https://github.com/prisma/prisma2/issues/1860
## Documentation error description
Right now certain sections of the docs are using enums with sqlite.
For example: https://github.com/prisma/prisma2/blob/master/docs/data-modeling.md#example
We have removed the shim for sqlite that used to create enums via a join and table and now we fallback to native database features for other supported databases. We should remove all usage of sqlite with enum from the docs as that can mislead our users.
Related: https://github.com/prisma/migrate/issues/377
| 1.0 | Remove enum usage from docs where sqlite is used - Original Ref: https://github.com/prisma/prisma2/issues/1860
## Documentation error description
Right now certain sections of the docs are using enums with sqlite.
For example: https://github.com/prisma/prisma2/blob/master/docs/data-modeling.md#example
We have removed the shim for sqlite that used to create enums via a join and table and now we fallback to native database features for other supported databases. We should remove all usage of sqlite with enum from the docs as that can mislead our users.
Related: https://github.com/prisma/migrate/issues/377
| process | remove enum usage from docs where sqlite is used original ref documentation error description right now certain sections of the docs are using enums with sqlite for example we have removed the shim for sqlite that used to create enums via a join and table and now we fallback to native database features for other supported databases we should remove all usage of sqlite with enum from the docs as that can mislead our users related | 1 |
26,977 | 12,501,202,474 | IssuesEvent | 2020-06-02 00:32:50 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | API_AUTHORIZED_IP_RANGE is not supported for Private Cluster | Pri2 container-service/svc | It's not clear in this page that you cannot use this command to limit access to the API when using Private Clusters.
The error reported via CLI is "--api-server-authorized-ip-ranges is not supported for private cluster"
Please update the page.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 660320d4-fad5-3307-e29f-5f4cfad4b7f7
* Version Independent ID: 85ac2e8b-633f-6e1a-e642-0645ca92e129
* Content: [API server authorized IP ranges in Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/api-server-authorized-ip-ranges)
* Content Source: [articles/aks/api-server-authorized-ip-ranges.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/api-server-authorized-ip-ranges.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned** | 1.0 | API_AUTHORIZED_IP_RANGE is not supported for Private Cluster - It's not clear in this page that you cannot use this command to limit access to the API when using Private Clusters.
The error reported via CLI is "--api-server-authorized-ip-ranges is not supported for private cluster"
Please update the page.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 660320d4-fad5-3307-e29f-5f4cfad4b7f7
* Version Independent ID: 85ac2e8b-633f-6e1a-e642-0645ca92e129
* Content: [API server authorized IP ranges in Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/api-server-authorized-ip-ranges)
* Content Source: [articles/aks/api-server-authorized-ip-ranges.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/api-server-authorized-ip-ranges.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned** | non_process | api authorized ip range is not supported for private cluster it s not clear in this page that you cannot use this command to limit access to the api when using private clusters the error reported via cli is api server authorized ip ranges is not supported for private cluster please update the page document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login mlearned microsoft alias mlearned | 0 |
21,064 | 6,975,126,134 | IssuesEvent | 2017-12-12 05:04:37 | caffe2/caffe2 | https://api.github.com/repos/caffe2/caffe2 | closed | caffe2 cpu-only on Mac highserria Error | build | I just want to run the cpu-only version of caffe2 on my mac. But some errors just confused me.
Command: -> cmake -DUSE_CUDA=OFF -DUSE_NNPACK=OFF ..
-- The CXX compiler identification is AppleClang 9.0.0.9000037
-- The C compiler identification is AppleClang 9.0.0.9000037
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Setting CMAKE_FIND_NO_INSTALL_PREFIX
-- Build type not set - defaulting to Release
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - found
-- Found Threads: TRUE
-- Found Protobuf: /usr/local/lib/libprotobuf.dylib (found version "3.2.0")
-- Found Git: /usr/local/bin/git (found version "2.13.0")
-- The BLAS backend of choice:Eigen
-- Found GFlags: /usr/local/include
-- Found gflags (include: /usr/local/include, library: /usr/local/lib/libgflags.dylib)
-- Found system gflags install.
-- Found Glog: /usr/local/include
-- Found glog (include: /usr/local/include, library: /usr/local/lib/libglog.dylib)
-- Found system glog install.
-- Found LMDB: /usr/local/include
-- Found lmdb (include: /usr/local/include, library: /usr/local/lib/liblmdb.dylib)
-- Found LevelDB: /usr/local/include
-- Found LevelDB (include: /usr/local/include, library: /usr/local/lib/libleveldb.dylib)
-- Found Snappy: /usr/local/include
-- Found Snappy (include: /usr/local/include, library: /usr/local/lib/libsnappy.dylib)
-- Found RocksDB: /usr/local/include
-- Found RocksDB (include: /usr/local/include, library: /usr/local/lib/librocksdb.dylib)
-- OpenCV found (/usr/local/share/OpenCV)
-- Found PythonInterp: /usr/local/bin/python2.7 (found suitable version "2.7.13", minimum required is "2.7")
-- Found PythonLibs: /usr/lib/libpython2.7.dylib (found suitable version "2.7.10", minimum required is "2.7")
-- Found NumPy: /usr/local/lib/python2.7/site-packages/numpy/core/include (found version "1.13.1")
-- NumPy ver. 1.13.1 found (include: /usr/local/lib/python2.7/site-packages/numpy/core/include)
-- Found pybind11: /usr/local/include
-- Could NOT find OpenMP_C (missing: OpenMP_C_FLAGS OpenMP_C_LIB_NAMES) (found version "1.0")
-- Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES) (found version "1.0")
CMake Warning at cmake/Dependencies.cmake:292 (message):
Not compiling with OpenMP. Suppress this warning with -DUSE_OPENMP=OFF
Call Stack (most recent call first):
CMakeLists.txt:73 (include)
CMake Warning at cmake/Dependencies.cmake:333 (message):
If not using cuda, one should not use NCCL either.
Call Stack (most recent call first):
CMakeLists.txt:73 (include)
CMake Warning at cmake/Dependencies.cmake:358 (message):
Gloo can only be used on Linux.
Call Stack (most recent call first):
CMakeLists.txt:73 (include)
CMake Warning at cmake/Dependencies.cmake:428 (message):
Metal is only used in ios builds.
Call Stack (most recent call first):
CMakeLists.txt:73 (include)
> -- Performing Test CAFFE2_LONG_IS_INT32_OR_64
> -- Performing Test CAFFE2_LONG_IS_INT32_OR_64 - Failed
> -- Need to define long as a separate typeid.
> -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
> -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed
> -- Turning off deprecation warning due to glog.**
> -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
> -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
> -- Current compiler supports avx2 extention. Will build perfkernels.
> -- NCCL operators skipped due to no CUDA support
> -- CUDA RTC operators skipped due to no CUDA support
> -- Including image processing operators
> -- Excluding video processing operators due to no opencv
> -- Excluding mkl operators as we are not using mkl
> -- MPI operators skipped due to no MPI support
> -- Automatically generating missing __init__.py files.
### -- General:
-- Git version : v0.8.1-423-g36995d5e-dirty
-- System : Darwin
-- C++ compiler : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- C++ compiler version : 9.0.0.9000037
-- Protobuf compiler : /usr/local/bin/protoc
-- CXX flags : -Wno-deprecated -std=c++11 -O2 -fPIC -Wno-narrowing
-- Build type : Release
-- Compile definitions :
--
-- BUILD_BINARY : ON
-- BUILD_PYTHON : ON
-- Python version : 2.7.10
-- Python library : /usr/lib/libpython2.7.dylib
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : ON
-- USE_ATEN : OFF
-- USE_CUDA : OFF
-- USE_FFMPEG : OFF
-- USE_GFLAGS : ON
-- USE_GLOG : ON
-- USE_GLOO : OFF
-- USE_LEVELDB : ON
-- LevelDB version : 1.20
-- Snappy version : ..
-- USE_LITE_PROTO : OFF
-- USE_LMDB : ON
-- LMDB version : 0.9.21
-- USE_METAL : OFF
-- USE_MOBILE_OPENGL : OFF
-- USE_MPI : OFF
-- USE_NCCL : OFF
-- USE_NERVANA_GPU : OFF
-- USE_NNPACK : OFF
-- USE_OBSERVERS : OFF
-- USE_OPENCV : ON
-- OpenCV version : 2.4.13
-- USE_OPENMP : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : ON
-- USE_THREADS : ON
-- USE_ZMQ : OFF
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/meixinhu/Desktop/caffe2/build
# It seems everything is ok. But I found the CMakeError.log
Performing C++ SOURCE FILE Test CAFFE2_LONG_IS_INT32_OR_64 failed with the following output:
Change Dir: /Users/meixinhu/Desktop/caffe2/build/CMakeFiles/CMakeTmp
Run Build Command:"/usr/bin/make" "cmTC_0579f/fast"
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/cmTC_0579f.dir/build.make CMakeFiles/cmTC_0579f.dir/build
Building CXX object CMakeFiles/cmTC_0579f.dir/src.cxx.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DCAFFE2_LONG_IS_INT32_OR_64 -std=c++11 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk -o CMakeFiles/cmTC_0579f.dir/src.cxx.o -c /Users/meixinhu/Desktop/caffe2/build/CMakeFiles/CMakeTmp/src.cxx
Linking CXX executable cmTC_0579f
/usr/local/Cellar/cmake/3.9.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_0579f.dir/link.txt --verbose=1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DCAFFE2_LONG_IS_INT32_OR_64 -std=c++11 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk -Wl,-search_paths_first -Wl,-headerpad_max_install_names CMakeFiles/cmTC_0579f.dir/src.cxx.o -o cmTC_0579f
Undefined symbols for architecture x86_64:
"void Foo<long>()", referenced from:
_main in src.cxx.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [cmTC_0579f] Error 1
make: *** [cmTC_0579f/fast] Error 2
Source file was:
#include <cstdint>
template <typename T> void Foo();
template<> void Foo<int32_t>() {}
template<> void Foo<int64_t>() {}
int main(int argc, char** argv) {
Foo<long>();
return 0;
}
Performing C++ SOURCE FILE Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING failed with the following output:
Change Dir: /Users/meixinhu/Desktop/caffe2/build/CMakeFiles/CMakeTmp
Run Build Command:"/usr/bin/make" "cmTC_69549/fast"
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/cmTC_69549.dir/build.make CMakeFiles/cmTC_69549.dir/build
Building CXX object CMakeFiles/cmTC_69549.dir/src.cxx.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DCAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING -std=c++11 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk -o CMakeFiles/cmTC_69549.dir/src.cxx.o -c /Users/meixinhu/Desktop/caffe2/build/CMakeFiles/CMakeTmp/src.cxx
/Users/meixinhu/Desktop/caffe2/build/CMakeFiles/CMakeTmp/src.cxx:1:10: fatal error: 'glog/stl_logging.h' file not found
#include <glog/stl_logging.h>
^~~~~~~~~~~~~~~~~~~~
1 error generated.
make[1]: *** [CMakeFiles/cmTC_69549.dir/src.cxx.o] Error 1
make: *** [cmTC_69549/fast] Error 2
Source file was:
#include <glog/stl_logging.h>
int main(int argc, char** argv) {
return 0;
}
## When I use command "make -j8" , I just see the errors that confused me.
[ 74%] Linking CXX executable binaries/fixed_divisor_test
Undefined symbols for architecture x86_64:
"testing::internal::MakeAndRegisterTestInfo(char const*, char const*, char const*, char const*, void const*, void (*)(), void (*)(), testing::internal::TestFactoryBase*)", referenced from:
__GLOBAL__sub_I_fixed_divisor_test.cc in fixed_divisor_test.cc.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [caffe2/binaries/fixed_divisor_test] Error 1
make[1]: *** [caffe2/CMakeFiles/fixed_divisor_test.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
So Could anyone tell me the solutions of these errors, Thank you!!!!!!! | 1.0 | caffe2 cpu-only on Mac highserria Error - I just want to run the cpu-only version of caffe2 on my mac. But some errors just confused me.
Command: -> cmake -DUSE_CUDA=OFF -DUSE_NNPACK=OFF ..
-- The CXX compiler identification is AppleClang 9.0.0.9000037
-- The C compiler identification is AppleClang 9.0.0.9000037
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Setting CMAKE_FIND_NO_INSTALL_PREFIX
-- Build type not set - defaulting to Release
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - found
-- Found Threads: TRUE
-- Found Protobuf: /usr/local/lib/libprotobuf.dylib (found version "3.2.0")
-- Found Git: /usr/local/bin/git (found version "2.13.0")
-- The BLAS backend of choice:Eigen
-- Found GFlags: /usr/local/include
-- Found gflags (include: /usr/local/include, library: /usr/local/lib/libgflags.dylib)
-- Found system gflags install.
-- Found Glog: /usr/local/include
-- Found glog (include: /usr/local/include, library: /usr/local/lib/libglog.dylib)
-- Found system glog install.
-- Found LMDB: /usr/local/include
-- Found lmdb (include: /usr/local/include, library: /usr/local/lib/liblmdb.dylib)
-- Found LevelDB: /usr/local/include
-- Found LevelDB (include: /usr/local/include, library: /usr/local/lib/libleveldb.dylib)
-- Found Snappy: /usr/local/include
-- Found Snappy (include: /usr/local/include, library: /usr/local/lib/libsnappy.dylib)
-- Found RocksDB: /usr/local/include
-- Found RocksDB (include: /usr/local/include, library: /usr/local/lib/librocksdb.dylib)
-- OpenCV found (/usr/local/share/OpenCV)
-- Found PythonInterp: /usr/local/bin/python2.7 (found suitable version "2.7.13", minimum required is "2.7")
-- Found PythonLibs: /usr/lib/libpython2.7.dylib (found suitable version "2.7.10", minimum required is "2.7")
-- Found NumPy: /usr/local/lib/python2.7/site-packages/numpy/core/include (found version "1.13.1")
-- NumPy ver. 1.13.1 found (include: /usr/local/lib/python2.7/site-packages/numpy/core/include)
-- Found pybind11: /usr/local/include
-- Could NOT find OpenMP_C (missing: OpenMP_C_FLAGS OpenMP_C_LIB_NAMES) (found version "1.0")
-- Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES) (found version "1.0")
CMake Warning at cmake/Dependencies.cmake:292 (message):
Not compiling with OpenMP. Suppress this warning with -DUSE_OPENMP=OFF
Call Stack (most recent call first):
CMakeLists.txt:73 (include)
CMake Warning at cmake/Dependencies.cmake:333 (message):
If not using cuda, one should not use NCCL either.
Call Stack (most recent call first):
CMakeLists.txt:73 (include)
CMake Warning at cmake/Dependencies.cmake:358 (message):
Gloo can only be used on Linux.
Call Stack (most recent call first):
CMakeLists.txt:73 (include)
CMake Warning at cmake/Dependencies.cmake:428 (message):
Metal is only used in ios builds.
Call Stack (most recent call first):
CMakeLists.txt:73 (include)
> -- Performing Test CAFFE2_LONG_IS_INT32_OR_64
> -- Performing Test CAFFE2_LONG_IS_INT32_OR_64 - Failed
> -- Need to define long as a separate typeid.
> -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
> -- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Failed
> -- Turning off deprecation warning due to glog.**
> -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
> -- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
> -- Current compiler supports avx2 extention. Will build perfkernels.
> -- NCCL operators skipped due to no CUDA support
> -- CUDA RTC operators skipped due to no CUDA support
> -- Including image processing operators
> -- Excluding video processing operators due to no opencv
> -- Excluding mkl operators as we are not using mkl
> -- MPI operators skipped due to no MPI support
> -- Automatically generating missing __init__.py files.
### -- General:
-- Git version : v0.8.1-423-g36995d5e-dirty
-- System : Darwin
-- C++ compiler : /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- C++ compiler version : 9.0.0.9000037
-- Protobuf compiler : /usr/local/bin/protoc
-- CXX flags : -Wno-deprecated -std=c++11 -O2 -fPIC -Wno-narrowing
-- Build type : Release
-- Compile definitions :
--
-- BUILD_BINARY : ON
-- BUILD_PYTHON : ON
-- Python version : 2.7.10
-- Python library : /usr/lib/libpython2.7.dylib
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : ON
-- USE_ATEN : OFF
-- USE_CUDA : OFF
-- USE_FFMPEG : OFF
-- USE_GFLAGS : ON
-- USE_GLOG : ON
-- USE_GLOO : OFF
-- USE_LEVELDB : ON
-- LevelDB version : 1.20
-- Snappy version : ..
-- USE_LITE_PROTO : OFF
-- USE_LMDB : ON
-- LMDB version : 0.9.21
-- USE_METAL : OFF
-- USE_MOBILE_OPENGL : OFF
-- USE_MPI : OFF
-- USE_NCCL : OFF
-- USE_NERVANA_GPU : OFF
-- USE_NNPACK : OFF
-- USE_OBSERVERS : OFF
-- USE_OPENCV : ON
-- OpenCV version : 2.4.13
-- USE_OPENMP : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : ON
-- USE_THREADS : ON
-- USE_ZMQ : OFF
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/meixinhu/Desktop/caffe2/build
# It seems everything is ok. But I found the CMakeError.log
Performing C++ SOURCE FILE Test CAFFE2_LONG_IS_INT32_OR_64 failed with the following output:
Change Dir: /Users/meixinhu/Desktop/caffe2/build/CMakeFiles/CMakeTmp
Run Build Command:"/usr/bin/make" "cmTC_0579f/fast"
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/cmTC_0579f.dir/build.make CMakeFiles/cmTC_0579f.dir/build
Building CXX object CMakeFiles/cmTC_0579f.dir/src.cxx.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DCAFFE2_LONG_IS_INT32_OR_64 -std=c++11 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk -o CMakeFiles/cmTC_0579f.dir/src.cxx.o -c /Users/meixinhu/Desktop/caffe2/build/CMakeFiles/CMakeTmp/src.cxx
Linking CXX executable cmTC_0579f
/usr/local/Cellar/cmake/3.9.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_0579f.dir/link.txt --verbose=1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DCAFFE2_LONG_IS_INT32_OR_64 -std=c++11 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk -Wl,-search_paths_first -Wl,-headerpad_max_install_names CMakeFiles/cmTC_0579f.dir/src.cxx.o -o cmTC_0579f
Undefined symbols for architecture x86_64:
"void Foo<long>()", referenced from:
_main in src.cxx.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [cmTC_0579f] Error 1
make: *** [cmTC_0579f/fast] Error 2
Source file was:
#include <cstdint>
template <typename T> void Foo();
template<> void Foo<int32_t>() {}
template<> void Foo<int64_t>() {}
int main(int argc, char** argv) {
Foo<long>();
return 0;
}
Performing C++ SOURCE FILE Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING failed with the following output:
Change Dir: /Users/meixinhu/Desktop/caffe2/build/CMakeFiles/CMakeTmp
Run Build Command:"/usr/bin/make" "cmTC_69549/fast"
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/cmTC_69549.dir/build.make CMakeFiles/cmTC_69549.dir/build
Building CXX object CMakeFiles/cmTC_69549.dir/src.cxx.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DCAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING -std=c++11 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk -o CMakeFiles/cmTC_69549.dir/src.cxx.o -c /Users/meixinhu/Desktop/caffe2/build/CMakeFiles/CMakeTmp/src.cxx
/Users/meixinhu/Desktop/caffe2/build/CMakeFiles/CMakeTmp/src.cxx:1:10: fatal error: 'glog/stl_logging.h' file not found
#include <glog/stl_logging.h>
^~~~~~~~~~~~~~~~~~~~
1 error generated.
make[1]: *** [CMakeFiles/cmTC_69549.dir/src.cxx.o] Error 1
make: *** [cmTC_69549/fast] Error 2
Source file was:
#include <glog/stl_logging.h>
int main(int argc, char** argv) {
return 0;
}
## When I use command "make -j8" , I just see the errors that confused me.
[ 74%] Linking CXX executable binaries/fixed_divisor_test
Undefined symbols for architecture x86_64:
"testing::internal::MakeAndRegisterTestInfo(char const*, char const*, char const*, char const*, void const*, void (*)(), void (*)(), testing::internal::TestFactoryBase*)", referenced from:
__GLOBAL__sub_I_fixed_divisor_test.cc in fixed_divisor_test.cc.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [caffe2/binaries/fixed_divisor_test] Error 1
make[1]: *** [caffe2/CMakeFiles/fixed_divisor_test.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
So Could anyone tell me the solutions of these errors, Thank you!!!!!!! | non_process | cpu only on mac highserria error i just want to run the cpu only version of on my mac but some errors just confused me command cmake duse cuda off duse nnpack off the cxx compiler identification is appleclang the c compiler identification is appleclang check for working cxx compiler applications xcode app contents developer toolchains xcodedefault xctoolchain usr bin c check for working cxx compiler applications xcode app contents developer toolchains xcodedefault xctoolchain usr bin c works detecting cxx compiler abi info detecting cxx compiler abi info done detecting cxx compile features detecting cxx compile features done check for working c compiler applications xcode app contents developer toolchains xcodedefault xctoolchain usr bin cc check for working c compiler applications xcode app contents developer toolchains xcodedefault xctoolchain usr bin cc works detecting c compiler abi info detecting c compiler abi info done detecting c compile features detecting c compile features done setting cmake find no install prefix build type not set defaulting to release looking for pthread h looking for pthread h found looking for pthread create looking for pthread create found found threads true found protobuf usr local lib libprotobuf dylib found version found git usr local bin git found version the blas backend of choice eigen found gflags usr local include found gflags include usr local include library usr local lib libgflags dylib found system gflags install found glog usr local include found glog include usr local include library usr local lib libglog dylib found system glog install found lmdb usr local include found lmdb include usr local include library usr local lib liblmdb dylib found leveldb usr local include found leveldb include usr local include library usr local lib libleveldb dylib found snappy usr local include found snappy include usr local include library usr local lib libsnappy dylib found rocksdb usr local include found rocksdb include usr local include library usr local lib librocksdb dylib opencv found usr local share opencv found pythoninterp usr local bin found suitable version minimum required is found pythonlibs usr lib dylib found suitable version minimum required is found numpy usr local lib site packages numpy core include found version numpy ver found include usr local lib site packages numpy core include found usr local include could not find openmp c missing openmp c flags openmp c lib names found version could not find openmp cxx missing openmp cxx flags openmp cxx lib names found version cmake warning at cmake dependencies cmake message not compiling with openmp suppress this warning with duse openmp off call stack most recent call first cmakelists txt include cmake warning at cmake dependencies cmake message if not using cuda one should not use nccl either call stack most recent call first cmakelists txt include cmake warning at cmake dependencies cmake message gloo can only be used on linux call stack most recent call first cmakelists txt include cmake warning at cmake dependencies cmake message metal is only used in ios builds call stack most recent call first cmakelists txt include performing test long is or performing test long is or failed need to define long as a separate typeid performing test need to turn off deprecation warning performing test need to turn off deprecation warning failed turning off deprecation warning due to glog performing test compiler supports extensions performing test compiler supports extensions success current compiler supports extention will build perfkernels nccl operators skipped due to no cuda support cuda rtc operators skipped due to no cuda support including image processing operators excluding video processing operators due to no opencv excluding mkl operators as we are not using mkl mpi operators skipped due to no mpi support automatically generating missing init py files general git version dirty system darwin c compiler applications xcode app contents developer toolchains xcodedefault xctoolchain usr bin c c compiler version protobuf compiler usr local bin protoc cxx flags wno deprecated std c fpic wno narrowing build type release compile definitions build binary on build python on python version python library usr lib dylib build shared libs on build test on use aten off use cuda off use ffmpeg off use gflags on use glog on use gloo off use leveldb on leveldb version snappy version use lite proto off use lmdb on lmdb version use metal off use mobile opengl off use mpi off use nccl off use nervana gpu off use nnpack off use observers off use opencv on opencv version use openmp off use redis off use rocksdb on use threads on use zmq off configuring done generating done build files have been written to users meixinhu desktop build it seems everything is ok but i found the cmakeerror log performing c source file test long is or failed with the following output change dir users meixinhu desktop build cmakefiles cmaketmp run build command usr bin make cmtc fast applications xcode app contents developer usr bin make f cmakefiles cmtc dir build make cmakefiles cmtc dir build building cxx object cmakefiles cmtc dir src cxx o applications xcode app contents developer toolchains xcodedefault xctoolchain usr bin c long is or std c isysroot applications xcode app contents developer platforms macosx platform developer sdks sdk o cmakefiles cmtc dir src cxx o c users meixinhu desktop build cmakefiles cmaketmp src cxx linking cxx executable cmtc usr local cellar cmake bin cmake e cmake link script cmakefiles cmtc dir link txt verbose applications xcode app contents developer toolchains xcodedefault xctoolchain usr bin c long is or std c isysroot applications xcode app contents developer platforms macosx platform developer sdks sdk wl search paths first wl headerpad max install names cmakefiles cmtc dir src cxx o o cmtc undefined symbols for architecture void foo referenced from main in src cxx o ld symbol s not found for architecture clang error linker command failed with exit code use v to see invocation make error make error source file was include template void foo template void foo template void foo int main int argc char argv foo return performing c source file test need to turn off deprecation warning failed with the following output change dir users meixinhu desktop build cmakefiles cmaketmp run build command usr bin make cmtc fast applications xcode app contents developer usr bin make f cmakefiles cmtc dir build make cmakefiles cmtc dir build building cxx object cmakefiles cmtc dir src cxx o applications xcode app contents developer toolchains xcodedefault xctoolchain usr bin c need to turn off deprecation warning std c isysroot applications xcode app contents developer platforms macosx platform developer sdks sdk o cmakefiles cmtc dir src cxx o c users meixinhu desktop build cmakefiles cmaketmp src cxx users meixinhu desktop build cmakefiles cmaketmp src cxx fatal error glog stl logging h file not found include error generated make error make error source file was include int main int argc char argv return when i use command make i just see the errors that confused me linking cxx executable binaries fixed divisor test undefined symbols for architecture testing internal makeandregistertestinfo char const char const char const char const void const void void testing internal testfactorybase referenced from global sub i fixed divisor test cc in fixed divisor test cc o ld symbol s not found for architecture clang error linker command failed with exit code use v to see invocation make error make error make waiting for unfinished jobs so could anyone tell me the solutions of these errors thank you | 0 |
207,041 | 16,064,260,936 | IssuesEvent | 2021-04-23 16:32:49 | google/transmat | https://api.github.com/repos/google/transmat | closed | Write article to explain the technique in depth | documentation | Explain the technique and vision about connecting the web. | 1.0 | Write article to explain the technique in depth - Explain the technique and vision about connecting the web. | non_process | write article to explain the technique in depth explain the technique and vision about connecting the web | 0 |
278,595 | 30,702,363,699 | IssuesEvent | 2023-07-27 01:23:46 | Nivaskumark/CVE-2020-0074-frameworks_base | https://api.github.com/repos/Nivaskumark/CVE-2020-0074-frameworks_base | reopened | CVE-2021-0521 (Medium) detected in baseandroid-11.0.0_r39, baseandroid-11.0.0_r39 | Mend: dependency security vulnerability | ## CVE-2021-0521 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>baseandroid-11.0.0_r39</b>, <b>baseandroid-11.0.0_r39</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In getAllPackages of PackageManagerService, there is a possible information disclosure due to a missing permission check. This could lead to local information disclosure of cross-user permissions with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-11 Android-8.1 Android-9 Android-10Android ID: A-174661955
<p>Publish Date: 2021-06-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-0521>CVE-2021-0521</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://source.android.com/security/bulletin/2021-06-01">https://source.android.com/security/bulletin/2021-06-01</a></p>
<p>Release Date: 2021-06-21</p>
<p>Fix Resolution: android-11.0.0_r38</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-0521 (Medium) detected in baseandroid-11.0.0_r39, baseandroid-11.0.0_r39 - ## CVE-2021-0521 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>baseandroid-11.0.0_r39</b>, <b>baseandroid-11.0.0_r39</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In getAllPackages of PackageManagerService, there is a possible information disclosure due to a missing permission check. This could lead to local information disclosure of cross-user permissions with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-11 Android-8.1 Android-9 Android-10Android ID: A-174661955
<p>Publish Date: 2021-06-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-0521>CVE-2021-0521</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://source.android.com/security/bulletin/2021-06-01">https://source.android.com/security/bulletin/2021-06-01</a></p>
<p>Release Date: 2021-06-21</p>
<p>Fix Resolution: android-11.0.0_r38</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in baseandroid baseandroid cve medium severity vulnerability vulnerable libraries baseandroid baseandroid vulnerability details in getallpackages of packagemanagerservice there is a possible information disclosure due to a missing permission check this could lead to local information disclosure of cross user permissions with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android step up your open source security game with mend | 0 |
18,080 | 24,095,346,126 | IssuesEvent | 2022-09-19 18:13:35 | keras-team/keras-cv | https://api.github.com/repos/keras-team/keras-cv | closed | Add augment_bounding_boxes support to MixUp layer | contribution-welcome preprocessing | The augment_bounding_boxes should be implemented for Mixup Layer in keras_cv. The PR should contain implementation, test scripts and a demo script to verify implementation.
Example code for implementing augment_bounding_boxes() can be found here
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_flip.py#:~:text=def%20augment_bounding_boxes(,)%3A
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_rotation.py#:~:text=def%20augment_image(self%2C%20image%2C%20transformation%2C%20**kwargs)%3A
- The implementations can be verified using demo utils in keras_cv.bounding_box - Example of demo script can be found here : https://github.com/keras-team/keras-cv/blob/master/examples/layers/preprocessing/bounding_box/random_rotation_demo.py | 1.0 | Add augment_bounding_boxes support to MixUp layer - The augment_bounding_boxes should be implemented for Mixup Layer in keras_cv. The PR should contain implementation, test scripts and a demo script to verify implementation.
Example code for implementing augment_bounding_boxes() can be found here
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_flip.py#:~:text=def%20augment_bounding_boxes(,)%3A
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_rotation.py#:~:text=def%20augment_image(self%2C%20image%2C%20transformation%2C%20**kwargs)%3A
- The implementations can be verified using demo utils in keras_cv.bounding_box - Example of demo script can be found here : https://github.com/keras-team/keras-cv/blob/master/examples/layers/preprocessing/bounding_box/random_rotation_demo.py | process | add augment bounding boxes support to mixup layer the augment bounding boxes should be implemented for mixup layer in keras cv the pr should contain implementation test scripts and a demo script to verify implementation example code for implementing augment bounding boxes can be found here the implementations can be verified using demo utils in keras cv bounding box example of demo script can be found here | 1 |
79,681 | 28,496,326,703 | IssuesEvent | 2023-04-18 14:29:09 | vector-im/element-desktop | https://api.github.com/repos/vector-im/element-desktop | opened | Number of unread messages badge not shown in the taskbar on Windows 10 | T-Defect | **Is your suggestion related to a problem? Please describe.**
Riot currently shows the number of unread messages in the upper-left corner of the window. It would be nice if it showed the number of unread messages in the icon as well.
**Describe the solution you'd like**
Here is a screenshot of both the Riot icon in the upper-left, as well as the taskbar icon:

The taskbar icon should be a larger version of the upper-left icon, indicating that there are unread messages. Alternately, the taskbar icon could simply overlay a dot, indicating that there are unread messages. | 1.0 | Number of unread messages badge not shown in the taskbar on Windows 10 - **Is your suggestion related to a problem? Please describe.**
Riot currently shows the number of unread messages in the upper-left corner of the window. It would be nice if it showed the number of unread messages in the icon as well.
**Describe the solution you'd like**
Here is a screenshot of both the Riot icon in the upper-left, as well as the taskbar icon:

The taskbar icon should be a larger version of the upper-left icon, indicating that there are unread messages. Alternately, the taskbar icon could simply overlay a dot, indicating that there are unread messages. | non_process | number of unread messages badge not shown in the taskbar on windows is your suggestion related to a problem please describe riot currently shows the number of unread messages in the upper left corner of the window it would be nice if it showed the number of unread messages in the icon as well describe the solution you d like here is a screenshot of both the riot icon in the upper left as well as the taskbar icon the taskbar icon should be a larger version of the upper left icon indicating that there are unread messages alternately the taskbar icon could simply overlay a dot indicating that there are unread messages | 0 |
740,231 | 25,740,189,873 | IssuesEvent | 2022-12-08 05:15:22 | googleapis/nodejs-ai-platform | https://api.github.com/repos/googleapis/nodejs-ai-platform | closed | AI platform create batch prediction job video object tracking: should create a video object tracking batch prediction job failed | type: bug priority: p1 flakybot: issue api: vertex-ai | Note: #382 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 04f7c858217f1a3ce7b1072c7bf8946d39947532
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/2c1b5aae-a46c-4a7a-a474-79da01e686eb), [Sponge](http://sponge2/2c1b5aae-a46c-4a7a-a474-79da01e686eb)
status: failed
<details><summary>Test output</summary><br><pre>Command failed: node ./create-batch-prediction-job-video-object-tracking.js temp_create_batch_prediction_video_object_tracking_testb9275625-b42c-4ca6-a465-e6c8f262b328 8609932509485989888 gs://ucaip-samples-test-output/inputs/vot_batch_prediction_input.jsonl gs://ucaip-samples-test-output/ undefined us-central1
7 PERMISSION_DENIED: Permission denied: Consumer 'project:undefined' has been suspended.
Error: Command failed: node ./create-batch-prediction-job-video-object-tracking.js temp_create_batch_prediction_video_object_tracking_testb9275625-b42c-4ca6-a465-e6c8f262b328 8609932509485989888 gs://ucaip-samples-test-output/inputs/vot_batch_prediction_input.jsonl gs://ucaip-samples-test-output/ undefined us-central1
7 PERMISSION_DENIED: Permission denied: Consumer 'project:undefined' has been suspended.
at checkExecSyncError (child_process.js:635:11)
at Object.execSync (child_process.js:671:15)
at execSync (test/create-batch-prediction-job-video-object-tracking.test.js:24:28)
at Context.<anonymous> (test/create-batch-prediction-job-video-object-tracking.test.js:46:20)
at processImmediate (internal/timers.js:461:21)</pre></details> | 1.0 | AI platform create batch prediction job video object tracking: should create a video object tracking batch prediction job failed - Note: #382 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 04f7c858217f1a3ce7b1072c7bf8946d39947532
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/2c1b5aae-a46c-4a7a-a474-79da01e686eb), [Sponge](http://sponge2/2c1b5aae-a46c-4a7a-a474-79da01e686eb)
status: failed
<details><summary>Test output</summary><br><pre>Command failed: node ./create-batch-prediction-job-video-object-tracking.js temp_create_batch_prediction_video_object_tracking_testb9275625-b42c-4ca6-a465-e6c8f262b328 8609932509485989888 gs://ucaip-samples-test-output/inputs/vot_batch_prediction_input.jsonl gs://ucaip-samples-test-output/ undefined us-central1
7 PERMISSION_DENIED: Permission denied: Consumer 'project:undefined' has been suspended.
Error: Command failed: node ./create-batch-prediction-job-video-object-tracking.js temp_create_batch_prediction_video_object_tracking_testb9275625-b42c-4ca6-a465-e6c8f262b328 8609932509485989888 gs://ucaip-samples-test-output/inputs/vot_batch_prediction_input.jsonl gs://ucaip-samples-test-output/ undefined us-central1
7 PERMISSION_DENIED: Permission denied: Consumer 'project:undefined' has been suspended.
at checkExecSyncError (child_process.js:635:11)
at Object.execSync (child_process.js:671:15)
at execSync (test/create-batch-prediction-job-video-object-tracking.test.js:24:28)
at Context.<anonymous> (test/create-batch-prediction-job-video-object-tracking.test.js:46:20)
at processImmediate (internal/timers.js:461:21)</pre></details> | non_process | ai platform create batch prediction job video object tracking should create a video object tracking batch prediction job failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output command failed node create batch prediction job video object tracking js temp create batch prediction video object tracking gs ucaip samples test output inputs vot batch prediction input jsonl gs ucaip samples test output undefined us permission denied permission denied consumer project undefined has been suspended error command failed node create batch prediction job video object tracking js temp create batch prediction video object tracking gs ucaip samples test output inputs vot batch prediction input jsonl gs ucaip samples test output undefined us permission denied permission denied consumer project undefined has been suspended at checkexecsyncerror child process js at object execsync child process js at execsync test create batch prediction job video object tracking test js at context test create batch prediction job video object tracking test js at processimmediate internal timers js | 0 |
713,948 | 24,544,691,569 | IssuesEvent | 2022-10-12 07:52:46 | gobitfly/eth2-beaconchain-explorer | https://api.github.com/repos/gobitfly/eth2-beaconchain-explorer | closed | Relax "attestation missed" notification to only notify after several missing attesations | enhancement High Priority backend | As an at home staker, it is normal to miss some attestations -- e.g. the broadband probably reconnects once per day, leading to some minutes of downtime. This means it is very common to get missing attestation notification.
Much more rarely, a validator actually goes down and you need to intervene. Unfortunately, it is easy to miss this because one tends to ignore notifications that come in very regularly. It would be nice to have a setting where you don't get notified when a single attestation is missing but only on extended downtime of a validator -- e.g. after a number of attestations are missing, like 5-10, but ideally adjustable. | 1.0 | Relax "attestation missed" notification to only notify after several missing attesations - As an at home staker, it is normal to miss some attestations -- e.g. the broadband probably reconnects once per day, leading to some minutes of downtime. This means it is very common to get missing attestation notification.
Much more rarely, a validator actually goes down and you need to intervene. Unfortunately, it is easy to miss this because one tends to ignore notifications that come in very regularly. It would be nice to have a setting where you don't get notified when a single attestation is missing but only on extended downtime of a validator -- e.g. after a number of attestations are missing, like 5-10, but ideally adjustable. | non_process | relax attestation missed notification to only notify after several missing attesations as an at home staker it is normal to miss some attestations e g the broadband probably reconnects once per day leading to some minutes of downtime this means it is very common to get missing attestation notification much more rarely a validator actually goes down and you need to intervene unfortunately it is easy to miss this because one tends to ignore notifications that come in very regularly it would be nice to have a setting where you don t get notified when a single attestation is missing but only on extended downtime of a validator e g after a number of attestations are missing like but ideally adjustable | 0 |
174,210 | 6,537,797,386 | IssuesEvent | 2017-09-01 01:00:53 | CanberraOceanRacingClub/namadgi3 | https://api.github.com/repos/CanberraOceanRacingClub/namadgi3 | closed | Install float switch on electric bilge pump | BME Electrics 12V priority 1: High | Currently the electric bilge pump only operates when it is switched on at the switch panel -- and then, it operates continuously.
Prudence would demand that
1. the bilge pump be left on at all times and
1. operates only when there is water to be pumped from the bilge
This will required the installation of a float switch and possibly some rewiring of the swithchboard
| 1.0 | Install float switch on electric bilge pump - Currently the electric bilge pump only operates when it is switched on at the switch panel -- and then, it operates continuously.
Prudence would demand that
1. the bilge pump be left on at all times and
1. operates only when there is water to be pumped from the bilge
This will required the installation of a float switch and possibly some rewiring of the swithchboard
| non_process | install float switch on electric bilge pump currently the electric bilge pump only operates when it is switched on at the switch panel and then it operates continuously prudence would demand that the bilge pump be left on at all times and operates only when there is water to be pumped from the bilge this will required the installation of a float switch and possibly some rewiring of the swithchboard | 0 |
9,329 | 12,339,892,457 | IssuesEvent | 2020-05-14 18:55:57 | MicrosoftDocs/windows-uwp | https://api.github.com/repos/MicrosoftDocs/windows-uwp | closed | Awesome and simple example | Pri2 processes-and-threading/tech uwp/prod | Thank You
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f43a6232-d7f2-2600-20bb-2f26da456628
* Version Independent ID: 9fde14b7-b11b-97bc-f6f9-3dd9219b3f9a
* Content: [Launch the default app for a file - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-the-default-app-for-a-file#feedback)
* Content Source: [windows-apps-src/launch-resume/launch-the-default-app-for-a-file.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/launch-the-default-app-for-a-file.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @lastnameholiu
* Microsoft Alias: **alholiu** | 1.0 | Awesome and simple example - Thank You
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f43a6232-d7f2-2600-20bb-2f26da456628
* Version Independent ID: 9fde14b7-b11b-97bc-f6f9-3dd9219b3f9a
* Content: [Launch the default app for a file - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-the-default-app-for-a-file#feedback)
* Content Source: [windows-apps-src/launch-resume/launch-the-default-app-for-a-file.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/launch-the-default-app-for-a-file.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @lastnameholiu
* Microsoft Alias: **alholiu** | process | awesome and simple example thank you document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login lastnameholiu microsoft alias alholiu | 1 |
44,174 | 12,025,547,610 | IssuesEvent | 2020-04-12 09:49:08 | tiangolo/jbrout | https://api.github.com/repos/tiangolo/jbrout | closed | unit tests need to be run from manatlan's machine? | Priority-Low Type-Defect auto-migrated | ```
What steps will reproduce the problem?
1. `cd unittests`
2. `python runtests.py`
3.
What is the expected output? What do you see instead?
Expected output should be something about how the unit tests completed
successfully.
Instead I see this:
(cut)
...
--- Tests tests_dbtags.py
Traceback (most recent call last):
File "runtests.py", line 29, in <module>
execfile( "../unittests/"+i )
File "../unittests/tests_dbtags.py", line 62, in <module>
dbt.save()
File "/home/conrad/data/documents/projects/photos/jbrout-read-only/jbrout/jbrout/db.py", line 962, in save
fid = open(self.file,"w")
IOError: [Errno 2] No such file or directory:
'/home/manatlan/db_jbrout_tags.xml'
Since it's looking for '/home/manatlan/db_jbrout_tags.xml', the success of the
test is based on the developer's environment. Unit tests should be able to
succeed without being run from a specific developer's machine.
```
Original issue reported on code.google.com by `conrad.p...@gmail.com` on 24 Aug 2011 at 1:16
| 1.0 | unit tests need to be run from manatlan's machine? - ```
What steps will reproduce the problem?
1. `cd unittests`
2. `python runtests.py`
3.
What is the expected output? What do you see instead?
Expected output should be something about how the unit tests completed
successfully.
Instead I see this:
(cut)
...
--- Tests tests_dbtags.py
Traceback (most recent call last):
File "runtests.py", line 29, in <module>
execfile( "../unittests/"+i )
File "../unittests/tests_dbtags.py", line 62, in <module>
dbt.save()
File "/home/conrad/data/documents/projects/photos/jbrout-read-only/jbrout/jbrout/db.py", line 962, in save
fid = open(self.file,"w")
IOError: [Errno 2] No such file or directory:
'/home/manatlan/db_jbrout_tags.xml'
Since it's looking for '/home/manatlan/db_jbrout_tags.xml', the success of the
test is based on the developer's environment. Unit tests should be able to
succeed without being run from a specific developer's machine.
```
Original issue reported on code.google.com by `conrad.p...@gmail.com` on 24 Aug 2011 at 1:16
| non_process | unit tests need to be run from manatlan s machine what steps will reproduce the problem cd unittests python runtests py what is the expected output what do you see instead expected output should be something about how the unit tests completed successfully instead i see this cut tests tests dbtags py traceback most recent call last file runtests py line in execfile unittests i file unittests tests dbtags py line in dbt save file home conrad data documents projects photos jbrout read only jbrout jbrout db py line in save fid open self file w ioerror no such file or directory home manatlan db jbrout tags xml since it s looking for home manatlan db jbrout tags xml the success of the test is based on the developer s environment unit tests should be able to succeed without being run from a specific developer s machine original issue reported on code google com by conrad p gmail com on aug at | 0 |
9,701 | 12,702,093,987 | IssuesEvent | 2020-06-22 19:28:34 | unicode-org/icu4x | https://api.github.com/repos/unicode-org/icu4x | closed | Use wildcard label filters to simplify triaging.md | C-process T-enhancement backlog help wanted | triaging.md has a lot of queries that enumerate all labels in a category, like all the "T-" or "C-" labels. It would be nice to filter these all out with a wildcard label search. Unfortunately, GitHub doesn't support this yet. I'm opening this issue to track this feature request if GitHub implements it.
https://github.com/isaacs/github/issues/1553 | 1.0 | Use wildcard label filters to simplify triaging.md - triaging.md has a lot of queries that enumerate all labels in a category, like all the "T-" or "C-" labels. It would be nice to filter these all out with a wildcard label search. Unfortunately, GitHub doesn't support this yet. I'm opening this issue to track this feature request if GitHub implements it.
https://github.com/isaacs/github/issues/1553 | process | use wildcard label filters to simplify triaging md triaging md has a lot of queries that enumerate all labels in a category like all the t or c labels it would be nice to filter these all out with a wildcard label search unfortunately github doesn t support this yet i m opening this issue to track this feature request if github implements it | 1 |
769,636 | 27,015,189,786 | IssuesEvent | 2023-02-10 18:41:11 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | DISABLED test_dispatch_meta_outplace_fft_fft_cuda_float64 (__main__.TestMetaCUDA) | high priority triaged skipped module: meta tensors | This test appears to be intermittently segfaulting. See: https://hud.pytorch.org/failure/RuntimeError%3A%20test_meta%20failed
Examples:
- https://github.com/pytorch/pytorch/actions/runs/3422286124/jobs/5699855677
- https://github.com/pytorch/pytorch/actions/runs/3422172113/jobs/5699549575
- https://github.com/pytorch/pytorch/actions/runs/3421764896/jobs/5698679974
cc @ezyang @gchanan @zou3519 @eellison @bdhirsh @soumith | 1.0 | DISABLED test_dispatch_meta_outplace_fft_fft_cuda_float64 (__main__.TestMetaCUDA) - This test appears to be intermittently segfaulting. See: https://hud.pytorch.org/failure/RuntimeError%3A%20test_meta%20failed
Examples:
- https://github.com/pytorch/pytorch/actions/runs/3422286124/jobs/5699855677
- https://github.com/pytorch/pytorch/actions/runs/3422172113/jobs/5699549575
- https://github.com/pytorch/pytorch/actions/runs/3421764896/jobs/5698679974
cc @ezyang @gchanan @zou3519 @eellison @bdhirsh @soumith | non_process | disabled test dispatch meta outplace fft fft cuda main testmetacuda this test appears to be intermittently segfaulting see examples cc ezyang gchanan eellison bdhirsh soumith | 0 |
24,530 | 6,550,928,905 | IssuesEvent | 2017-09-05 13:06:48 | jimmerioles/bitcoin-currency-converter-php | https://api.github.com/repos/jimmerioles/bitcoin-currency-converter-php | closed | Fix "Generic Files LineLength TooLong" issue in src/Provider/AbstractProvider.php | codeclimate | Line exceeds 120 characters; contains 124 characters
https://codeclimate.com/github/jimmerioles/bitcoin-currency-converter-php/src/Provider/AbstractProvider.php#issue_59ae6f761dced6000100001f | 1.0 | Fix "Generic Files LineLength TooLong" issue in src/Provider/AbstractProvider.php - Line exceeds 120 characters; contains 124 characters
https://codeclimate.com/github/jimmerioles/bitcoin-currency-converter-php/src/Provider/AbstractProvider.php#issue_59ae6f761dced6000100001f | non_process | fix generic files linelength toolong issue in src provider abstractprovider php line exceeds characters contains characters | 0 |
7,103 | 10,256,891,665 | IssuesEvent | 2019-08-21 18:45:35 | material-components/material-components-ios | https://api.github.com/repos/material-components/material-components-ios | closed | Internal issue: b/138794963 | type:Process | This was filed as an internal issue. If you are a Googler, please visit [b/138794963](http://b/138794963) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/138794963](http://b/138794963)
- Blocked by: https://github.com/material-components/material-components-ios/issues/8157 | 1.0 | Internal issue: b/138794963 - This was filed as an internal issue. If you are a Googler, please visit [b/138794963](http://b/138794963) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/138794963](http://b/138794963)
- Blocked by: https://github.com/material-components/material-components-ios/issues/8157 | process | internal issue b this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug blocked by | 1 |
8,380 | 11,541,983,012 | IssuesEvent | 2020-02-18 06:03:48 | google/ground-android | https://api.github.com/repos/google/ground-android | opened | [Feature] Description | priority: p2 type: process | From comment https://github.com/google/ground-android/pull/370#pullrequestreview-359936471
- Run code check on commit instead
- Realtime warnings in Android Studio (probably by using a plugin) | 1.0 | [Feature] Description - From comment https://github.com/google/ground-android/pull/370#pullrequestreview-359936471
- Run code check on commit instead
- Realtime warnings in Android Studio (probably by using a plugin) | process | description from comment run code check on commit instead realtime warnings in android studio probably by using a plugin | 1 |
10,157 | 13,044,162,618 | IssuesEvent | 2020-07-29 03:47:34 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | UCP: Migrate scalar function `FoundRows` from TiDB | challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor |
## Description
Port the scalar function `FoundRows` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
| 2.0 | UCP: Migrate scalar function `FoundRows` from TiDB -
## Description
Port the scalar function `FoundRows` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
| process | ucp migrate scalar function foundrows from tidb description port the scalar function foundrows from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb | 1 |
4,730 | 7,572,148,893 | IssuesEvent | 2018-04-23 14:15:04 | openvstorage/pyrakoon | https://api.github.com/repos/openvstorage/pyrakoon | closed | KeyError in compat.py _send_message | process_cantreproduce type_bug | ```
Dec 19 03:15:51 NY1SRV0002 python2[39339]: 2017-12-19 03:15:51 75500 -0500 - NY1SRV0002 - 39339/139840258029312 - arakoon_client/pyrakoon - 17834353 - ERROR - 'roLR7yTDk9hMdPGK': Unable to validate master on node
roLR7yTDk9hMdPGK
Dec 19 03:15:51 NY1SRV0002 python2[39339]: Traceback (most recent call last):
Dec 19 03:15:51 NY1SRV0002 python2[39339]: File "/opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1185, in determine_master
Dec 19 03:15:51 NY1SRV0002 python2[39339]: self._validate_master_id(self.master_id):
Dec 19 03:15:51 NY1SRV0002 python2[39339]: File "/opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1216, in _validate_master_id
Dec 19 03:15:51 NY1SRV0002 python2[39339]: other_master_id = self._get_master_id_from_node(master_id)
Dec 19 03:15:51 NY1SRV0002 python2[39339]: File "/opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1207, in _get_master_id_from_node
Dec 19 03:15:51 NY1SRV0002 python2[39339]: connection = self._send_message(node_id, data)
Dec 19 03:15:51 NY1SRV0002 python2[39339]: File "/opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1146, in _send_message
Dec 19 03:15:51 NY1SRV0002 python2[39339]: self._connections.pop(node_id).close()
Dec 19 03:15:51 NY1SRV0002 python2[39339]: KeyError: 'roLR7yTDk9hMdPGK'
``` | 1.0 | KeyError in compat.py _send_message - ```
Dec 19 03:15:51 NY1SRV0002 python2[39339]: 2017-12-19 03:15:51 75500 -0500 - NY1SRV0002 - 39339/139840258029312 - arakoon_client/pyrakoon - 17834353 - ERROR - 'roLR7yTDk9hMdPGK': Unable to validate master on node
roLR7yTDk9hMdPGK
Dec 19 03:15:51 NY1SRV0002 python2[39339]: Traceback (most recent call last):
Dec 19 03:15:51 NY1SRV0002 python2[39339]: File "/opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1185, in determine_master
Dec 19 03:15:51 NY1SRV0002 python2[39339]: self._validate_master_id(self.master_id):
Dec 19 03:15:51 NY1SRV0002 python2[39339]: File "/opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1216, in _validate_master_id
Dec 19 03:15:51 NY1SRV0002 python2[39339]: other_master_id = self._get_master_id_from_node(master_id)
Dec 19 03:15:51 NY1SRV0002 python2[39339]: File "/opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1207, in _get_master_id_from_node
Dec 19 03:15:51 NY1SRV0002 python2[39339]: connection = self._send_message(node_id, data)
Dec 19 03:15:51 NY1SRV0002 python2[39339]: File "/opt/OpenvStorage/ovs/extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1146, in _send_message
Dec 19 03:15:51 NY1SRV0002 python2[39339]: self._connections.pop(node_id).close()
Dec 19 03:15:51 NY1SRV0002 python2[39339]: KeyError: 'roLR7yTDk9hMdPGK'
``` | process | keyerror in compat py send message dec arakoon client pyrakoon error unable to validate master on node dec traceback most recent call last dec file opt openvstorage ovs extensions db arakoon pyrakoon pyrakoon compat py line in determine master dec self validate master id self master id dec file opt openvstorage ovs extensions db arakoon pyrakoon pyrakoon compat py line in validate master id dec other master id self get master id from node master id dec file opt openvstorage ovs extensions db arakoon pyrakoon pyrakoon compat py line in get master id from node dec connection self send message node id data dec file opt openvstorage ovs extensions db arakoon pyrakoon pyrakoon compat py line in send message dec self connections pop node id close dec keyerror | 1 |
6,646 | 9,762,161,080 | IssuesEvent | 2019-06-05 10:34:42 | aiidateam/aiida_core | https://api.github.com/repos/aiidateam/aiida_core | opened | Should process functions be made submittable | aiida-core 1.x priority/important requires discussion topic/daemon topic/processes type/question | Currently, it is not possible to `submit` process functions. This means that they will always "run" blockingly in the current interpreter, be it a local interpreter or one of a daemon worker. However, this also means that when a daemon worker is shutdown while it is working on a process function, that will be interrupted. Any calling processes will most likely than also fail. The fact that process functions always run blockingly means that shutting down the daemon becomes dangerous, because one risks killing active process functions. I am not sure yet if it would even be possible to make process functions submittable like other processes and make them restartable when the daemon is shutdown
Pinging @muhrin for input | 1.0 | Should process functions be made submittable - Currently, it is not possible to `submit` process functions. This means that they will always "run" blockingly in the current interpreter, be it a local interpreter or one of a daemon worker. However, this also means that when a daemon worker is shutdown while it is working on a process function, that will be interrupted. Any calling processes will most likely than also fail. The fact that process functions always run blockingly means that shutting down the daemon becomes dangerous, because one risks killing active process functions. I am not sure yet if it would even be possible to make process functions submittable like other processes and make them restartable when the daemon is shutdown
Pinging @muhrin for input | process | should process functions be made submittable currently it is not possible to submit process functions this means that they will always run blockingly in the current interpreter be it a local interpreter or one of a daemon worker however this also means that when a daemon worker is shutdown while it is working on a process function that will be interrupted any calling processes will most likely than also fail the fact that process functions always run blockingly means that shutting down the daemon becomes dangerous because one risks killing active process functions i am not sure yet if it would even be possible to make process functions submittable like other processes and make them restartable when the daemon is shutdown pinging muhrin for input | 1 |
26,013 | 12,824,792,801 | IssuesEvent | 2020-07-06 14:01:28 | tarantool/tarantool | https://api.github.com/repos/tarantool/tarantool | closed | Add TPC-H benchmark to CI for regular testing | performance qa | In #2903 we have prepared tools (scripts, data, makefiles) for running TPC-H benchmarking of Tarantool against baseline.
Now we need to integrate it to regular CI testing with collection of results (Q1..Q12 for given commit hash) for visualization in the bench.tarantool.org
- NB! Outliers (Q13, Q17 and Q20) are excluded now, to keep timings of run as reasonabel values. We will return back to them once #4933, #4935 and #4936 will be resolved;
- I'd love to see chart with Q1 .. Q22 as x-axis, and version numbers as y-axis. But I'm ok to see it as we display now for other benchmarks; | True | Add TPC-H benchmark to CI for regular testing - In #2903 we have prepared tools (scripts, data, makefiles) for running TPC-H benchmarking of Tarantool against baseline.
Now we need to integrate it to regular CI testing with collection of results (Q1..Q12 for given commit hash) for visualization in the bench.tarantool.org
- NB! Outliers (Q13, Q17 and Q20) are excluded now, to keep timings of run as reasonabel values. We will return back to them once #4933, #4935 and #4936 will be resolved;
- I'd love to see chart with Q1 .. Q22 as x-axis, and version numbers as y-axis. But I'm ok to see it as we display now for other benchmarks; | non_process | add tpc h benchmark to ci for regular testing in we have prepared tools scripts data makefiles for running tpc h benchmarking of tarantool against baseline now we need to integrate it to regular ci testing with collection of results for given commit hash for visualization in the bench tarantool org nb outliers and are excluded now to keep timings of run as reasonabel values we will return back to them once and will be resolved i d love to see chart with as x axis and version numbers as y axis but i m ok to see it as we display now for other benchmarks | 0 |
73,028 | 15,252,093,556 | IssuesEvent | 2021-02-20 01:30:06 | billmcchesney1/davinci | https://api.github.com/repos/billmcchesney1/davinci | opened | CVE-2021-23341 (High) detected in prismjs-1.19.0.tgz | security vulnerability | ## CVE-2021-23341 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>prismjs-1.19.0.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.19.0.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.19.0.tgz</a></p>
<p>Path to dependency file: davinci/package.json</p>
<p>Path to vulnerable library: davinci/node_modules/prismjs/package.json</p>
<p>
Dependency Hierarchy:
- vuepress-1.2.0.tgz (Root Library)
- core-1.3.0.tgz
- markdown-1.3.0.tgz
- :x: **prismjs-1.19.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package prismjs before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.
<p>Publish Date: 2021-02-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23341>CVE-2021-23341</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341</a></p>
<p>Release Date: 2021-02-18</p>
<p>Fix Resolution: 1.23.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"prismjs","packageVersion":"1.19.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"vuepress:1.2.0;@vuepress/core:1.3.0;@vuepress/markdown:1.3.0;prismjs:1.19.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.23.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23341","vulnerabilityDetails":"The package prismjs before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23341","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-23341 (High) detected in prismjs-1.19.0.tgz - ## CVE-2021-23341 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>prismjs-1.19.0.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.19.0.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.19.0.tgz</a></p>
<p>Path to dependency file: davinci/package.json</p>
<p>Path to vulnerable library: davinci/node_modules/prismjs/package.json</p>
<p>
Dependency Hierarchy:
- vuepress-1.2.0.tgz (Root Library)
- core-1.3.0.tgz
- markdown-1.3.0.tgz
- :x: **prismjs-1.19.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package prismjs before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.
<p>Publish Date: 2021-02-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23341>CVE-2021-23341</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341</a></p>
<p>Release Date: 2021-02-18</p>
<p>Fix Resolution: 1.23.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"prismjs","packageVersion":"1.19.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"vuepress:1.2.0;@vuepress/core:1.3.0;@vuepress/markdown:1.3.0;prismjs:1.19.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.23.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23341","vulnerabilityDetails":"The package prismjs before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23341","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_process | cve high detected in prismjs tgz cve high severity vulnerability vulnerable library prismjs tgz lightweight robust elegant syntax highlighting a spin off project from dabblet library home page a href path to dependency file davinci package json path to vulnerable library davinci node modules prismjs package json dependency hierarchy vuepress tgz root library core tgz markdown tgz x prismjs tgz vulnerable library found in base branch master vulnerability details the package prismjs before are vulnerable to regular expression denial of service redos via the prism asciidoc prism rest prism tap and prism eiffel components publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree vuepress vuepress core vuepress markdown prismjs isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails the package prismjs before are vulnerable to regular expression denial of service redos via the prism asciidoc prism rest prism tap and prism eiffel components vulnerabilityurl | 0 |
21,420 | 6,150,957,683 | IssuesEvent | 2017-06-28 00:27:07 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | Stop using @pete and @uma (strings) in role assignments rather than @2 and @3 (database IDs) | Component: Code Infrastructure Component: Permissions Type: Suggestion zTriaged | The more I stare at the role assignment table (screenshot attached) the more I'm completely freaked out by the use of strings in role assignments rather than id numbers.
When I assign a role to Pete it gets persisted as "@pete" rather than "@2".
I'm fine with the use of @ for users and some other symbol (&) for groups.
But what if pete wants to change his username?
Why should I have to do extra database lookups to figure out the database id of the authenticated user? Why can't I just have the authenticated user's database id right there in front of me while I'm looking at the role assignment table?
Here are some more examples with `@admin`, `@finch`, and `@spruce`:

| 1.0 | Stop using @pete and @uma (strings) in role assignments rather than @2 and @3 (database IDs) - The more I stare at the role assignment table (screenshot attached) the more I'm completely freaked out by the use of strings in role assignments rather than id numbers.
When I assign a role to Pete it gets persisted as "@pete" rather than "@2".
I'm fine with the use of @ for users and some other symbol (&) for groups.
But what if pete wants to change his username?
Why should I have to do extra database lookups to figure out the database id of the authenticated user? Why can't I just have the authenticated user's database id right there in front of me while I'm looking at the role assignment table?
Here are some more examples with `@admin`, `@finch`, and `@spruce`:

| non_process | stop using pete and uma strings in role assignments rather than and database ids the more i stare at the role assignment table screenshot attached the more i m completely freaked out by the use of strings in role assignments rather than id numbers when i assign a role to pete it gets persisted as pete rather than i m fine with the use of for users and some other symbol for groups but what if pete wants to change his username why should i have to do extra database lookups to figure out the database id of the authenticated user why can t i just have the authenticated user s database id right there in front of me while i m looking at the role assignment table here are some more examples with admin finch and spruce | 0 |
16,937 | 22,286,954,466 | IssuesEvent | 2022-06-11 19:40:16 | sparc4-dev/astropop | https://api.github.com/repos/sparc4-dev/astropop | closed | Set reference image for image registration | enhancement image-processing | Image registration for lists of images should have an argument called `ref_image` to manual set the image number to be used as reference. | 1.0 | Set reference image for image registration - Image registration for lists of images should have an argument called `ref_image` to manual set the image number to be used as reference. | process | set reference image for image registration image registration for lists of images should have an argument called ref image to manual set the image number to be used as reference | 1 |
1,846 | 4,647,440,011 | IssuesEvent | 2016-10-01 14:00:29 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | `process.stdin` is not implemented in bash for Windows 10... unknown type | child_process libuv process windows | * **Version**: 6.4.0 and 4.5.0 tested
* **Platform**: bash on Windows 10 64bit
* **Subsystem**: stdio (I think?)
uname output, just in case: `Linux WHITSON 3.4.0+ #1 PREEMPT Thu Aug 1 17:06:05 CST 2013 x86_64 x86_64 x86_64`
Repro code:
```javascript
var exec = require('child_process').exec;
var fs = require('fs');
var stream = fs.createReadStream('./repro.js');
var child = exec(
'node -e "process.stdin.pipe(process.stdout)"',
{ cwd: process.cwd() },
console.log.bind(console, 'result args:')
);
stream.pipe(child.stdin);
```
I named this file `repro.js`... yes, it reads itself, but any stream will reproduce this error.
Then run: `node repro.js`
This will output:
```bash
kiril@WHITSON:~/temp$ node repro.js
result args: { [Error: Command failed: /bin/sh -c node -e "process.stdin.pipe(process.stdout)"
node.js:708
throw new Error('Implement me. Unknown stdin file type!');
^
Error: Implement me. Unknown stdin file type!
at process.stdin (node.js:708:17)
at [eval]:1:8
at Object.exports.runInThisContext (vm.js:54:17)
at Object.<anonymous> ([eval]-wrapper:6:22)
at Module._compile (module.js:409:26)
at node.js:579:27
at nextTickCallbackWith0Args (node.js:420:9)
at process._tickCallback (node.js:349:13)
]
killed: false,
code: 1,
signal: null,
cmd: '/bin/sh -c node -e "process.stdin.pipe(process.stdout)"' } node.js:708
throw new Error('Implement me. Unknown stdin file type!');
^
Error: Implement me. Unknown stdin file type!
at process.stdin (node.js:708:17)
at [eval]:1:8
at Object.exports.runInThisContext (vm.js:54:17)
at Object.<anonymous> ([eval]-wrapper:6:22)
at Module._compile (module.js:409:26)
at node.js:579:27
at nextTickCallbackWith0Args (node.js:420:9)
at process._tickCallback (node.js:349:13)
```
This code works correctly (outputting the contents of the file itself) on regular Ubuntu and regular Windows, but not on the Linux subsystem in Windows 10.
This was discovered in https://github.com/catdad/shellton/issues/11.
Same issue happens with `spawn` as well, obviously, but it's tougher to get the error. | 2.0 | `process.stdin` is not implemented in bash for Windows 10... unknown type - * **Version**: 6.4.0 and 4.5.0 tested
* **Platform**: bash on Windows 10 64bit
* **Subsystem**: stdio (I think?)
uname output, just in case: `Linux WHITSON 3.4.0+ #1 PREEMPT Thu Aug 1 17:06:05 CST 2013 x86_64 x86_64 x86_64`
Repro code:
```javascript
var exec = require('child_process').exec;
var fs = require('fs');
var stream = fs.createReadStream('./repro.js');
var child = exec(
'node -e "process.stdin.pipe(process.stdout)"',
{ cwd: process.cwd() },
console.log.bind(console, 'result args:')
);
stream.pipe(child.stdin);
```
I named this file `repro.js`... yes, it reads itself, but any stream will reproduce this error.
Then run: `node repro.js`
This will output:
```bash
kiril@WHITSON:~/temp$ node repro.js
result args: { [Error: Command failed: /bin/sh -c node -e "process.stdin.pipe(process.stdout)"
node.js:708
throw new Error('Implement me. Unknown stdin file type!');
^
Error: Implement me. Unknown stdin file type!
at process.stdin (node.js:708:17)
at [eval]:1:8
at Object.exports.runInThisContext (vm.js:54:17)
at Object.<anonymous> ([eval]-wrapper:6:22)
at Module._compile (module.js:409:26)
at node.js:579:27
at nextTickCallbackWith0Args (node.js:420:9)
at process._tickCallback (node.js:349:13)
]
killed: false,
code: 1,
signal: null,
cmd: '/bin/sh -c node -e "process.stdin.pipe(process.stdout)"' } node.js:708
throw new Error('Implement me. Unknown stdin file type!');
^
Error: Implement me. Unknown stdin file type!
at process.stdin (node.js:708:17)
at [eval]:1:8
at Object.exports.runInThisContext (vm.js:54:17)
at Object.<anonymous> ([eval]-wrapper:6:22)
at Module._compile (module.js:409:26)
at node.js:579:27
at nextTickCallbackWith0Args (node.js:420:9)
at process._tickCallback (node.js:349:13)
```
This code works correctly (outputting the contents of the file itself) on regular Ubuntu and regular Windows, but not on the Linux subsystem in Windows 10.
This was discovered in https://github.com/catdad/shellton/issues/11.
Same issue happens with `spawn` as well, obviously, but it's tougher to get the error. | process | process stdin is not implemented in bash for windows unknown type version and tested platform bash on windows subsystem stdio i think uname output just in case linux whitson preempt thu aug cst repro code javascript var exec require child process exec var fs require fs var stream fs createreadstream repro js var child exec node e process stdin pipe process stdout cwd process cwd console log bind console result args stream pipe child stdin i named this file repro js yes it reads itself but any stream will reproduce this error then run node repro js this will output bash kiril whitson temp node repro js result args error command failed bin sh c node e process stdin pipe process stdout node js throw new error implement me unknown stdin file type error implement me unknown stdin file type at process stdin node js at at object exports runinthiscontext vm js at object wrapper at module compile module js at node js at node js at process tickcallback node js killed false code signal null cmd bin sh c node e process stdin pipe process stdout node js throw new error implement me unknown stdin file type error implement me unknown stdin file type at process stdin node js at at object exports runinthiscontext vm js at object wrapper at module compile module js at node js at node js at process tickcallback node js this code works correctly outputting the contents of the file itself on regular ubuntu and regular windows but not on the linux subsystem in windows this was discovered in same issue happens with spawn as well obviously but it s tougher to get the error | 1 |
171,771 | 13,247,849,142 | IssuesEvent | 2020-08-19 17:56:34 | robe070/cookbooks | https://api.github.com/repos/robe070/cookbooks | reopened | P2 Solution Template Regression Test Suite | azure enhancement test | The Solution Template is fully regression tested in the build pipeline. (Previously it was going to be in preview which presumed that Microsoft will have checked the submission thoroughly by the. But they don't. They only do that when choose to go LIVE! Actually getting to preview seems to just require that the ARM TTK tests pass, which we already know is the case. Better to test it automatically in the build pipeline rather than have an extra manual step in Preview. Now, once its ready to submit for preview there are no further tests to be performed. From then on its just about Microsoft validation)
(Note: tables are edited in [this ](https://docs.google.com/document/d/1wcWsssDRp6wCH0xJu-rQ0w_4asgLkS0E-J8IAZdX8Ho/edit?usp=sharing) Google Docs document. And permission needs to be provided to edit it)
- Each run of the template must be created in its own resource group with a unique name. Possibly the names of the tests below will work?
- If the resource group exists already it must be deleted.
- If the test succeeds, the resource group is deleted
- If the test fails, the resource group is retained for diagnostic purposes.
- The resource group must be tagged `Usage=test-temp`. These resource groups will be deleted at the COB IST. If they need to be retained overnight then the tag `ShutdownPolicy=excluded` must be added manually.
The tests are as follows
- Template Settings must be the default unless specified in the table below.
Test | SQLAZURE1 | SQLAZURE2 | SQLAZURE3 | SQLAZURE4 | SQLAZURE5 | SQLAZURE6
-- | -- | -- | -- | -- | -- | --
AG Tier | Standard | Standard | Standard | WAF | WAF | WAF
AG SKU | Standard_Small | Standard_Medium | Standard_Large | WAF_Medium | WAF_Large | WAF_v2
DB New | New | New | New | New | New | New
DB Type | SQLAZURE | SQLAZURE | SQLAZURE | SQLAZURE | SQLAZURE | SQLAZURE
Edition | Basic | Standard | Standard | Standard | Premium | Premium
RSO Name | Basic | S0 | S4 | S12 | P1 | P11
Image Offer | lansa-scalable-license | lansa-scalable-license | lansa-scalable-license | lansa-scalable-license | lansa-scalable-license | lansa-scalable-license
Image Plan | lansa-scalable-license-14-2 | w19d-14-2 | w16d-14-2 | w19d-15-0 | w16d-15-0 | w12r2d-15-0
Result | Success | Success | Success | Success | Success | Success
The tests for MYSQL and MSSQLS must programmatically create the database in the resource group before deploying the template.
Test | MYSQL1 | MSSQLS1
-- | -- | --
AG Tier | Standard | Standard
AG SKU | Standard_Medium | Standard_Medium
DB New | Existing | Existing
DB Type | MYSQL | MSSQLS
Edition | Standard | Standard
RSO Name | S2 | S2
Image Offer | lansa-scalable-license | lansa-scalable-license
Image Plan | lansa-scalable-license-14-2 | lansa-scalable-license-14-2
Result | Success | Success
| 1.0 | P2 Solution Template Regression Test Suite - The Solution Template is fully regression tested in the build pipeline. (Previously it was going to be in preview which presumed that Microsoft will have checked the submission thoroughly by the. But they don't. They only do that when choose to go LIVE! Actually getting to preview seems to just require that the ARM TTK tests pass, which we already know is the case. Better to test it automatically in the build pipeline rather than have an extra manual step in Preview. Now, once its ready to submit for preview there are no further tests to be performed. From then on its just about Microsoft validation)
(Note: tables are edited in [this ](https://docs.google.com/document/d/1wcWsssDRp6wCH0xJu-rQ0w_4asgLkS0E-J8IAZdX8Ho/edit?usp=sharing) Google Docs document. And permission needs to be provided to edit it)
- Each run of the template must be created in its own resource group with a unique name. Possibly the names of the tests below will work?
- If the resource group exists already it must be deleted.
- If the test succeeds, the resource group is deleted
- If the test fails, the resource group is retained for diagnostic purposes.
- The resource group must be tagged `Usage=test-temp`. These resource groups will be deleted at the COB IST. If they need to be retained overnight then the tag `ShutdownPolicy=excluded` must be added manually.
The tests are as follows
- Template Settings must be the default unless specified in the table below.
Test | SQLAZURE1 | SQLAZURE2 | SQLAZURE3 | SQLAZURE4 | SQLAZURE5 | SQLAZURE6
-- | -- | -- | -- | -- | -- | --
AG Tier | Standard | Standard | Standard | WAF | WAF | WAF
AG SKU | Standard_Small | Standard_Medium | Standard_Large | WAF_Medium | WAF_Large | WAF_v2
DB New | New | New | New | New | New | New
DB Type | SQLAZURE | SQLAZURE | SQLAZURE | SQLAZURE | SQLAZURE | SQLAZURE
Edition | Basic | Standard | Standard | Standard | Premium | Premium
RSO Name | Basic | S0 | S4 | S12 | P1 | P11
Image Offer | lansa-scalable-license | lansa-scalable-license | lansa-scalable-license | lansa-scalable-license | lansa-scalable-license | lansa-scalable-license
Image Plan | lansa-scalable-license-14-2 | w19d-14-2 | w16d-14-2 | w19d-15-0 | w16d-15-0 | w12r2d-15-0
Result | Success | Success | Success | Success | Success | Success
The tests for MYSQL and MSSQLS must programmatically create the database in the resource group before deploying the template.
Test | MYSQL1 | MSSQLS1
-- | -- | --
AG Tier | Standard | Standard
AG SKU | Standard_Medium | Standard_Medium
DB New | Existing | Existing
DB Type | MYSQL | MSSQLS
Edition | Standard | Standard
RSO Name | S2 | S2
Image Offer | lansa-scalable-license | lansa-scalable-license
Image Plan | lansa-scalable-license-14-2 | lansa-scalable-license-14-2
Result | Success | Success
| non_process | solution template regression test suite the solution template is fully regression tested in the build pipeline previously it was going to be in preview which presumed that microsoft will have checked the submission thoroughly by the but they don t they only do that when choose to go live actually getting to preview seems to just require that the arm ttk tests pass which we already know is the case better to test it automatically in the build pipeline rather than have an extra manual step in preview now once its ready to submit for preview there are no further tests to be performed from then on its just about microsoft validation note tables are edited in google docs document and permission needs to be provided to edit it each run of the template must be created in its own resource group with a unique name possibly the names of the tests below will work if the resource group exists already it must be deleted if the test succeeds the resource group is deleted if the test fails the resource group is retained for diagnostic purposes the resource group must be tagged usage test temp these resource groups will be deleted at the cob ist if they need to be retained overnight then the tag shutdownpolicy excluded must be added manually the tests are as follows template settings must be the default unless specified in the table below test ag tier standard standard standard waf waf waf ag sku standard small standard medium standard large waf medium waf large waf db new new new new new new new db type sqlazure sqlazure sqlazure sqlazure sqlazure sqlazure edition basic standard standard standard premium premium rso name basic image offer lansa scalable license lansa scalable license lansa scalable license lansa scalable license lansa scalable license lansa scalable license image plan lansa scalable license result success success success success success success the tests for mysql and mssqls must programmatically create the database in the resource group before deploying the template test ag tier standard standard ag sku standard medium standard medium db new existing existing db type mysql mssqls edition standard standard rso name image offer lansa scalable license lansa scalable license image plan lansa scalable license lansa scalable license result success success | 0 |
4,356 | 7,260,435,275 | IssuesEvent | 2018-02-18 09:50:25 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | [FEATURE] Remove zonal stats plugin | Automatic new feature Processing | Original commit: https://github.com/qgis/QGIS/commit/b2587b7bf33376dc2f158e145322f82b341a1922 by nyalldawson
This is now fully exposed via processing, which is the logical
place for this feature to reside.
One less c++ plugin is a good thing!
(marked as feature so we remember to mention this in changelog!)
| 1.0 | [FEATURE] Remove zonal stats plugin - Original commit: https://github.com/qgis/QGIS/commit/b2587b7bf33376dc2f158e145322f82b341a1922 by nyalldawson
This is now fully exposed via processing, which is the logical
place for this feature to reside.
One less c++ plugin is a good thing!
(marked as feature so we remember to mention this in changelog!)
| process | remove zonal stats plugin original commit by nyalldawson this is now fully exposed via processing which is the logical place for this feature to reside one less c plugin is a good thing marked as feature so we remember to mention this in changelog | 1 |
105,112 | 16,624,121,836 | IssuesEvent | 2021-06-03 07:25:28 | Thanraj/OpenSSL_1.0.1b | https://api.github.com/repos/Thanraj/OpenSSL_1.0.1b | opened | CVE-2013-0166 (Medium) detected in opensslOpenSSL_1_0_1b, opensslOpenSSL_1_0_1b | security vulnerability | ## CVE-2013-0166 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opensslOpenSSL_1_0_1b</b>, <b>opensslOpenSSL_1_0_1b</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
OpenSSL before 0.9.8y, 1.0.0 before 1.0.0k, and 1.0.1 before 1.0.1d does not properly perform signature verification for OCSP responses, which allows remote OCSP servers to cause a denial of service (NULL pointer dereference and application crash) via an invalid key.
<p>Publish Date: 2013-02-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-0166>CVE-2013-0166</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2013-0166">https://nvd.nist.gov/vuln/detail/CVE-2013-0166</a></p>
<p>Release Date: 2013-02-08</p>
<p>Fix Resolution: 0.9.8y,1.0.0k,1.0.1d</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2013-0166 (Medium) detected in opensslOpenSSL_1_0_1b, opensslOpenSSL_1_0_1b - ## CVE-2013-0166 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opensslOpenSSL_1_0_1b</b>, <b>opensslOpenSSL_1_0_1b</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
OpenSSL before 0.9.8y, 1.0.0 before 1.0.0k, and 1.0.1 before 1.0.1d does not properly perform signature verification for OCSP responses, which allows remote OCSP servers to cause a denial of service (NULL pointer dereference and application crash) via an invalid key.
<p>Publish Date: 2013-02-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-0166>CVE-2013-0166</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2013-0166">https://nvd.nist.gov/vuln/detail/CVE-2013-0166</a></p>
<p>Release Date: 2013-02-08</p>
<p>Fix Resolution: 0.9.8y,1.0.0k,1.0.1d</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in opensslopenssl opensslopenssl cve medium severity vulnerability vulnerable libraries opensslopenssl opensslopenssl vulnerability details openssl before before and before does not properly perform signature verification for ocsp responses which allows remote ocsp servers to cause a denial of service null pointer dereference and application crash via an invalid key publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
8,432 | 11,596,716,257 | IssuesEvent | 2020-02-24 19:29:32 | aws/aws-cdk-rfcs | https://api.github.com/repos/aws/aws-cdk-rfcs | closed | Construct library graduation process | process status/proposed |
|PR|Champion|
|--|--------|
|# | |
## Description
Devise a methodology to graduate a module
- Bake time (age)
- Bugs
- Feature-set
- API sign-off
- Documentation
- Sample
- Bug Bash
- Usability
Devise a metric of "amount" of experimentation we have, and track it so we allocate capacity to maintain continuously.
## Progress
<!-- indicates the state of the proposal -->
<!-- see readme for information on rfc lifecycle -->
- [x] Tracking Issue Created
- [ ] RFC PR Created <!-- add link to above table when available -->
- [ ] Core Team Member Assigned <!-- add username to above table when known -->
- [ ] Initial Approval / Final Comment Period
- [ ] Ready For Implementation
<!-- add list of issues needed for implementing the proposal here -->
- [ ] implementation issue 1
- [ ] Resolved <!-- implementation complete and merged --> | 1.0 | Construct library graduation process -
|PR|Champion|
|--|--------|
|# | |
## Description
Devise a methodology to graduate a module
- Bake time (age)
- Bugs
- Feature-set
- API sign-off
- Documentation
- Sample
- Bug Bash
- Usability
Devise a metric of "amount" of experimentation we have, and track it so we allocate capacity to maintain continuously.
## Progress
<!-- indicates the state of the proposal -->
<!-- see readme for information on rfc lifecycle -->
- [x] Tracking Issue Created
- [ ] RFC PR Created <!-- add link to above table when available -->
- [ ] Core Team Member Assigned <!-- add username to above table when known -->
- [ ] Initial Approval / Final Comment Period
- [ ] Ready For Implementation
<!-- add list of issues needed for implementing the proposal here -->
- [ ] implementation issue 1
- [ ] Resolved <!-- implementation complete and merged --> | process | construct library graduation process pr champion description devise a methodology to graduate a module bake time age bugs feature set api sign off documentation sample bug bash usability devise a metric of amount of experimentation we have and track it so we allocate capacity to maintain continuously progress tracking issue created rfc pr created core team member assigned initial approval final comment period ready for implementation implementation issue resolved | 1 |
16,868 | 9,917,003,345 | IssuesEvent | 2019-06-28 21:58:26 | istio/istio | https://api.github.com/repos/istio/istio | closed | After applying a policy, Galley crashes | area/config area/security/aaa | Working with Istio 1.1.7
Studying how to use JWTs, I applied the following YAML file
```
apiVersion: "authentication.istio.io/v1alpha1"
kind: Policy
metadata:
name: front-end-policy
spec:
targets:
- name: front-end
origins:
- jwt:
issuer: "testing@secure.istio.io"
jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/jwks.json"
principalBinding: USE_ORIGIN
```
I get this erro
Error from server (InternalError): error when creating "policy.yaml": Internal error occurred: failed calling admission webhook "pilot.validation.istio.io": Post https://istio-galley.istio-system.svc:7443/admitpilot?timeout=30s: stream error: stream ID 1; INTERNAL_ERROR
The Galley logs show
2019-06-13T11:22:49.757037Z info http2: panic serving 172.16.0.49:35736: runtime error: invalid memory address or nil pointer dereference
goroutine 1442 [running]:
net/http.(*http2serverConn).runHandler.func1(0xc4203daae0, 0xc420be9faf, 0xc42050c1c0)
/usr/local/go/src/net/http/h2_bundle.go:5753 +0x190
panic(0x1ca6dc0, 0x2fd1910)
/usr/local/go/src/runtime/panic.go:502 +0x229
istio.io/istio/pilot/pkg/model.ValidateAuthenticationPolicy(0xc420c818b0, 0x10, 0xc420c818a7, 0x7, 0x21be3c0, 0xc420474100, 0xe, 0x1fab594)
/workspace/go/src/istio.io/istio/pilot/pkg/model/validation.go:1457 +0x58d
istio.io/istio/galley/pkg/crd/validation.(*Webhook).admitPilot(0xc420692b40, 0xc420e96000, 0xc4201bef00)
/workspace/go/src/istio.io/istio/galley/pkg/crd/validation/webhook.go:472 +0x509
istio.io/istio/galley/pkg/crd/validation.(*Webhook).(istio.io/istio/galley/pkg/crd/validation.admitPilot)-fm(0xc420e96000, 0xc420d9b000)
/workspace/go/src/istio.io/istio/galley/pkg/crd/validation/webhook.go:435 +0x34
istio.io/istio/galley/pkg/crd/validation.serve(0x21cb6c0, 0xc4203daae0, 0xc420d4a200, 0xc420be9cc0)
/workspace/go/src/istio.io/istio/galley/pkg/crd/validation/webhook.go:407 +0x4ee
istio.io/istio/galley/pkg/crd/validation.(*Webhook).serveAdmitPilot(0xc420692b40, 0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/workspace/go/src/istio.io/istio/galley/pkg/crd/validation/webhook.go:435 +0x5f
istio.io/istio/galley/pkg/crd/validation.(*Webhook).(istio.io/istio/galley/pkg/crd/validation.serveAdmitPilot)-fm(0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/workspace/go/src/istio.io/istio/galley/pkg/crd/validation/webhook.go:287 +0x48
net/http.HandlerFunc.ServeHTTP(0xc4200b7c10, 0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/usr/local/go/src/net/http/server.go:1947 +0x44
net/http.(*ServeMux).ServeHTTP(0xc4206919b0, 0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/usr/local/go/src/net/http/server.go:2340 +0x130
net/http.serverHandler.ServeHTTP(0xc42061fe10, 0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/usr/local/go/src/net/http/server.go:2697 +0xbc
net/http.initNPNRequest.ServeHTTP(0xc4204bf500, 0xc42061fe10, 0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/usr/local/go/src/net/http/server.go:3263 +0x9a
net/http.(Handler).ServeHTTP-fm(0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/usr/local/go/src/net/http/h2_bundle.go:5475 +0x4d
net/http.(*http2serverConn).runHandler(0xc42050c1c0, 0xc4203daae0, 0xc420d4a200, 0xc420c4d600)
/usr/local/go/src/net/http/h2_bundle.go:5760 +0x89
created by net/http.(*http2serverConn).processHeaders
/usr/local/go/src/net/http/h2_bundle.go:5494 +0x46b
Any ideas, please? | True | After applying a policy, Galley crashes - Working with Istio 1.1.7
Studying how to use JWTs, I applied the following YAML file
```
apiVersion: "authentication.istio.io/v1alpha1"
kind: Policy
metadata:
name: front-end-policy
spec:
targets:
- name: front-end
origins:
- jwt:
issuer: "testing@secure.istio.io"
jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.1/security/tools/jwt/samples/jwks.json"
principalBinding: USE_ORIGIN
```
I get this erro
Error from server (InternalError): error when creating "policy.yaml": Internal error occurred: failed calling admission webhook "pilot.validation.istio.io": Post https://istio-galley.istio-system.svc:7443/admitpilot?timeout=30s: stream error: stream ID 1; INTERNAL_ERROR
The Galley logs show
2019-06-13T11:22:49.757037Z info http2: panic serving 172.16.0.49:35736: runtime error: invalid memory address or nil pointer dereference
goroutine 1442 [running]:
net/http.(*http2serverConn).runHandler.func1(0xc4203daae0, 0xc420be9faf, 0xc42050c1c0)
/usr/local/go/src/net/http/h2_bundle.go:5753 +0x190
panic(0x1ca6dc0, 0x2fd1910)
/usr/local/go/src/runtime/panic.go:502 +0x229
istio.io/istio/pilot/pkg/model.ValidateAuthenticationPolicy(0xc420c818b0, 0x10, 0xc420c818a7, 0x7, 0x21be3c0, 0xc420474100, 0xe, 0x1fab594)
/workspace/go/src/istio.io/istio/pilot/pkg/model/validation.go:1457 +0x58d
istio.io/istio/galley/pkg/crd/validation.(*Webhook).admitPilot(0xc420692b40, 0xc420e96000, 0xc4201bef00)
/workspace/go/src/istio.io/istio/galley/pkg/crd/validation/webhook.go:472 +0x509
istio.io/istio/galley/pkg/crd/validation.(*Webhook).(istio.io/istio/galley/pkg/crd/validation.admitPilot)-fm(0xc420e96000, 0xc420d9b000)
/workspace/go/src/istio.io/istio/galley/pkg/crd/validation/webhook.go:435 +0x34
istio.io/istio/galley/pkg/crd/validation.serve(0x21cb6c0, 0xc4203daae0, 0xc420d4a200, 0xc420be9cc0)
/workspace/go/src/istio.io/istio/galley/pkg/crd/validation/webhook.go:407 +0x4ee
istio.io/istio/galley/pkg/crd/validation.(*Webhook).serveAdmitPilot(0xc420692b40, 0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/workspace/go/src/istio.io/istio/galley/pkg/crd/validation/webhook.go:435 +0x5f
istio.io/istio/galley/pkg/crd/validation.(*Webhook).(istio.io/istio/galley/pkg/crd/validation.serveAdmitPilot)-fm(0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/workspace/go/src/istio.io/istio/galley/pkg/crd/validation/webhook.go:287 +0x48
net/http.HandlerFunc.ServeHTTP(0xc4200b7c10, 0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/usr/local/go/src/net/http/server.go:1947 +0x44
net/http.(*ServeMux).ServeHTTP(0xc4206919b0, 0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/usr/local/go/src/net/http/server.go:2340 +0x130
net/http.serverHandler.ServeHTTP(0xc42061fe10, 0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/usr/local/go/src/net/http/server.go:2697 +0xbc
net/http.initNPNRequest.ServeHTTP(0xc4204bf500, 0xc42061fe10, 0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/usr/local/go/src/net/http/server.go:3263 +0x9a
net/http.(Handler).ServeHTTP-fm(0x21cb6c0, 0xc4203daae0, 0xc420d4a200)
/usr/local/go/src/net/http/h2_bundle.go:5475 +0x4d
net/http.(*http2serverConn).runHandler(0xc42050c1c0, 0xc4203daae0, 0xc420d4a200, 0xc420c4d600)
/usr/local/go/src/net/http/h2_bundle.go:5760 +0x89
created by net/http.(*http2serverConn).processHeaders
/usr/local/go/src/net/http/h2_bundle.go:5494 +0x46b
Any ideas, please? | non_process | after applying a policy galley crashes working with istio studying how to use jwts i applied the following yaml file apiversion authentication istio io kind policy metadata name front end policy spec targets name front end origins jwt issuer testing secure istio io jwksuri principalbinding use origin i get this erro error from server internalerror error when creating policy yaml internal error occurred failed calling admission webhook pilot validation istio io post stream error stream id internal error the galley logs show info panic serving runtime error invalid memory address or nil pointer dereference goroutine net http runhandler usr local go src net http bundle go panic usr local go src runtime panic go istio io istio pilot pkg model validateauthenticationpolicy workspace go src istio io istio pilot pkg model validation go istio io istio galley pkg crd validation webhook admitpilot workspace go src istio io istio galley pkg crd validation webhook go istio io istio galley pkg crd validation webhook istio io istio galley pkg crd validation admitpilot fm workspace go src istio io istio galley pkg crd validation webhook go istio io istio galley pkg crd validation serve workspace go src istio io istio galley pkg crd validation webhook go istio io istio galley pkg crd validation webhook serveadmitpilot workspace go src istio io istio galley pkg crd validation webhook go istio io istio galley pkg crd validation webhook istio io istio galley pkg crd validation serveadmitpilot fm workspace go src istio io istio galley pkg crd validation webhook go net http handlerfunc servehttp usr local go src net http server go net http servemux servehttp usr local go src net http server go net http serverhandler servehttp usr local go src net http server go net http initnpnrequest servehttp usr local go src net http server go net http handler servehttp fm usr local go src net http bundle go net http runhandler usr local go src net http bundle go created by net http processheaders usr local go src net http bundle go any ideas please | 0 |
1,376 | 3,932,874,308 | IssuesEvent | 2016-04-25 17:10:33 | kerubistan/kerub | https://api.github.com/repos/kerubistan/kerub | opened | add firewall commands | component:data processing component:virtualization enhancement priority: normal | Add junix commands to handle firewall status
basic operations just open and close ports | 1.0 | add firewall commands - Add junix commands to handle firewall status
basic operations just open and close ports | process | add firewall commands add junix commands to handle firewall status basic operations just open and close ports | 1 |
18,151 | 24,189,065,006 | IssuesEvent | 2022-09-23 15:42:08 | ydb-platform/ydb | https://api.github.com/repos/ydb-platform/ydb | closed | Добавление рекурсивных запросов | enhancement area/queryprocessor | Планируете ли вы добавить рекурсивные запросы? Насколько я понял, на данный момент yql не поддерживает рекурсивные запросы даже через ACTION, лямбда функции или subquerries. | 1.0 | Добавление рекурсивных запросов - Планируете ли вы добавить рекурсивные запросы? Насколько я понял, на данный момент yql не поддерживает рекурсивные запросы даже через ACTION, лямбда функции или subquerries. | process | добавление рекурсивных запросов планируете ли вы добавить рекурсивные запросы насколько я понял на данный момент yql не поддерживает рекурсивные запросы даже через action лямбда функции или subquerries | 1 |
83,444 | 10,352,661,570 | IssuesEvent | 2019-09-05 09:43:51 | Royal-Navy/standards-toolkit | https://api.github.com/repos/Royal-Navy/standards-toolkit | closed | Docs Site sidebar redesign | Docs Site In Progress - Development Signed Off - Design enhancement | # Overview
After conducting multiple User Research sessions on the Docs Site, a recurring issue has emerged with the usability of the sidebar. Currently, the active state uses a blue colour to signify the active page in sidebar, however we also use a slightly darker blue to highlight a parent link in the sidebar. This causes the parent link to look active, even when it isn't.
The proposed solution:
- Remove the blue highlight from the parent links to prevent confusion as to what is currently the active page.
- Placing all sub links inside a collapsable menu underneath the Parent link.
- Removing the background to reduce the overall impact of the sidebar
<img width="983" alt="Screenshot 2019-08-22 at 12 10 57" src="https://user-images.githubusercontent.com/48090803/63510362-47504880-c4d6-11e9-8234-7c33f604fd45.png">
| 1.0 | Docs Site sidebar redesign - # Overview
After conducting multiple User Research sessions on the Docs Site, a recurring issue has emerged with the usability of the sidebar. Currently, the active state uses a blue colour to signify the active page in sidebar, however we also use a slightly darker blue to highlight a parent link in the sidebar. This causes the parent link to look active, even when it isn't.
The proposed solution:
- Remove the blue highlight from the parent links to prevent confusion as to what is currently the active page.
- Placing all sub links inside a collapsable menu underneath the Parent link.
- Removing the background to reduce the overall impact of the sidebar
<img width="983" alt="Screenshot 2019-08-22 at 12 10 57" src="https://user-images.githubusercontent.com/48090803/63510362-47504880-c4d6-11e9-8234-7c33f604fd45.png">
| non_process | docs site sidebar redesign overview after conducting multiple user research sessions on the docs site a recurring issue has emerged with the usability of the sidebar currently the active state uses a blue colour to signify the active page in sidebar however we also use a slightly darker blue to highlight a parent link in the sidebar this causes the parent link to look active even when it isn t the proposed solution remove the blue highlight from the parent links to prevent confusion as to what is currently the active page placing all sub links inside a collapsable menu underneath the parent link removing the background to reduce the overall impact of the sidebar img width alt screenshot at src | 0 |
9,255 | 12,291,928,751 | IssuesEvent | 2020-05-10 12:26:12 | Arch666Angel/mods | https://api.github.com/repos/Arch666Angel/mods | closed | Enhancements to bio exploration requirements, so you don't have to search big areas | Angels Bio Processing Enhancement | Rework idea for bio processing gardens (see [forum post](https://forums.factorio.com/viewtopic.php?p=491359#p491359)).
- Red science
- Ue 1 plant life sample to generate a random garden
- 1% chance to get a garden of each
- this means 1/33.333 to get a garden from 1 sample, so a small loss
- [x] Recipe:
- 1 plant life sample
- 50 mineralised water
- 5 cellulose pulp
- Green science
- Duplication of gardens. Repeating this process increases the net gain of plant life samples. This means to keep this process going, you don't have to convert every garden to samples again to keep this going, so you get a net gain of gardens out of this.
- Use 1 garden + 16 samples to get 2 gardens out
- [x] Recipe
- 1 garden
- 16 samples
- 50 alien goo
- To create trees, the nuclear garden option is obsolete since the option mentioned above will replace this. Create new recipes to create trees using :
- [x] Recipe using general tree seed
- 16 plant life sample
- 2 seed
- 50 alien goo
- 1 fertelizer
- result in 25% chance of getting any special tree
- [x] Recipes to get rid of your excess special trees
- 1 special tree gives 6 raw bio stuffs (depending on tree)
- [ ] ~~New tech to unlock this after the special tree arboretums~~
- Green science bugfixes
- Alien bacteria require Perchloric acid, which is blue science.
- [x] Swap this out with Hydrochloric acid instead
- [x] Alien Bio Processing 1 (tech) should have a prerequisite on Chlorine processing 1
- Creating Alien Goo also requires quite an amount of nutrient pulp. Having a recipe that uses excess meat to turn into more Alien Goo would be profitable, but making sure it is less efficient than creating fish itself.
- [x] Recipe to create more alien goo from polluted fish water
- 100 fish water
- 25 meat
- [ ] ~~Add numbers I and II to the biter egg recipes~~
- [x] Check light color on arboretum recipes
- [x] Fix localization on queen biters
- Blue science
- [x] Recipe using special tree seeds
- 10 plant life sample
- 2 seed
- 50 alien goo
- 1 nuclear fertelizer
- result in 50% chance of getting that single special tree (double the chance, but only 1/3 of trees, so net 2/3 as productive as the general seed)
- [ ] ~~Tech to unlock set recipes (including nuclear fertilizer)~~
- Prerequisite on tree arboretum 2
- prerequisite on the previous recipe mentioned to create from general seed
- prerequisite on uranium processing
- Blue science fixes
- [x] Remove now the obsolete garden mutation
- General fixes
- [x] Make the necessary migration | 1.0 | Enhancements to bio exploration requirements, so you don't have to search big areas - Rework idea for bio processing gardens (see [forum post](https://forums.factorio.com/viewtopic.php?p=491359#p491359)).
- Red science
- Ue 1 plant life sample to generate a random garden
- 1% chance to get a garden of each
- this means 1/33.333 to get a garden from 1 sample, so a small loss
- [x] Recipe:
- 1 plant life sample
- 50 mineralised water
- 5 cellulose pulp
- Green science
- Duplication of gardens. Repeating this process increases the net gain of plant life samples. This means to keep this process going, you don't have to convert every garden to samples again to keep this going, so you get a net gain of gardens out of this.
- Use 1 garden + 16 samples to get 2 gardens out
- [x] Recipe
- 1 garden
- 16 samples
- 50 alien goo
- To create trees, the nuclear garden option is obsolete since the option mentioned above will replace this. Create new recipes to create trees using :
- [x] Recipe using general tree seed
- 16 plant life sample
- 2 seed
- 50 alien goo
- 1 fertelizer
- result in 25% chance of getting any special tree
- [x] Recipes to get rid of your excess special trees
- 1 special tree gives 6 raw bio stuffs (depending on tree)
- [ ] ~~New tech to unlock this after the special tree arboretums~~
- Green science bugfixes
- Alien bacteria require Perchloric acid, which is blue science.
- [x] Swap this out with Hydrochloric acid instead
- [x] Alien Bio Processing 1 (tech) should have a prerequisite on Chlorine processing 1
- Creating Alien Goo also requires quite an amount of nutrient pulp. Having a recipe that uses excess meat to turn into more Alien Goo would be profitable, but making sure it is less efficient than creating fish itself.
- [x] Recipe to create more alien goo from polluted fish water
- 100 fish water
- 25 meat
- [ ] ~~Add numbers I and II to the biter egg recipes~~
- [x] Check light color on arboretum recipes
- [x] Fix localization on queen biters
- Blue science
- [x] Recipe using special tree seeds
- 10 plant life sample
- 2 seed
- 50 alien goo
- 1 nuclear fertelizer
- result in 50% chance of getting that single special tree (double the chance, but only 1/3 of trees, so net 2/3 as productive as the general seed)
- [ ] ~~Tech to unlock set recipes (including nuclear fertilizer)~~
- Prerequisite on tree arboretum 2
- prerequisite on the previous recipe mentioned to create from general seed
- prerequisite on uranium processing
- Blue science fixes
- [x] Remove now the obsolete garden mutation
- General fixes
- [x] Make the necessary migration | process | enhancements to bio exploration requirements so you don t have to search big areas rework idea for bio processing gardens see red science ue plant life sample to generate a random garden chance to get a garden of each this means to get a garden from sample so a small loss recipe plant life sample mineralised water cellulose pulp green science duplication of gardens repeating this process increases the net gain of plant life samples this means to keep this process going you don t have to convert every garden to samples again to keep this going so you get a net gain of gardens out of this use garden samples to get gardens out recipe garden samples alien goo to create trees the nuclear garden option is obsolete since the option mentioned above will replace this create new recipes to create trees using recipe using general tree seed plant life sample seed alien goo fertelizer result in chance of getting any special tree recipes to get rid of your excess special trees special tree gives raw bio stuffs depending on tree new tech to unlock this after the special tree arboretums green science bugfixes alien bacteria require perchloric acid which is blue science swap this out with hydrochloric acid instead alien bio processing tech should have a prerequisite on chlorine processing creating alien goo also requires quite an amount of nutrient pulp having a recipe that uses excess meat to turn into more alien goo would be profitable but making sure it is less efficient than creating fish itself recipe to create more alien goo from polluted fish water fish water meat add numbers i and ii to the biter egg recipes check light color on arboretum recipes fix localization on queen biters blue science recipe using special tree seeds plant life sample seed alien goo nuclear fertelizer result in chance of getting that single special tree double the chance but only of trees so net as productive as the general seed tech to unlock set recipes including nuclear fertilizer prerequisite on tree arboretum prerequisite on the previous recipe mentioned to create from general seed prerequisite on uranium processing blue science fixes remove now the obsolete garden mutation general fixes make the necessary migration | 1 |
14,125 | 17,019,772,105 | IssuesEvent | 2021-07-02 16:59:42 | icra/ecam | https://api.github.com/repos/icra/ecam | closed | Include sources (of the equation) directly in the equation | discussed in process | Dear Lluis, this might need some further work from Lluis' side, but I will include this "issue" here already. Just so that we don't forget.
We want to move away from a document with sources to having all the sources in the tool. With the option that you suggested (to show the page of the IPCC report directly. But showing the page in ECAM would already be the luxurious version... | 1.0 | Include sources (of the equation) directly in the equation - Dear Lluis, this might need some further work from Lluis' side, but I will include this "issue" here already. Just so that we don't forget.
We want to move away from a document with sources to having all the sources in the tool. With the option that you suggested (to show the page of the IPCC report directly. But showing the page in ECAM would already be the luxurious version... | process | include sources of the equation directly in the equation dear lluis this might need some further work from lluis side but i will include this issue here already just so that we don t forget we want to move away from a document with sources to having all the sources in the tool with the option that you suggested to show the page of the ipcc report directly but showing the page in ecam would already be the luxurious version | 1 |
18,308 | 24,420,582,466 | IssuesEvent | 2022-10-05 19:55:13 | biocodellc/localcontexts_db | https://api.github.com/repos/biocodellc/localcontexts_db | closed | Content update for Registration > Verify your email | registration process content | Clarify and add in time limit on the Verify your email page during registration.
### Current text
<img width="1032" alt="Screen Shot 2022-10-04 at 2 52 06 PM" src="https://user-images.githubusercontent.com/49764220/193902039-391afad6-8f8c-4bf5-8d32-4bcb66900819.png">
### Updated text (bold and bullets as below, if possible)
**A verification link has been sent to the email you provided.**
- Your email must be verified within 30 days in order to be activated.
- If you have verified your email, you’re all set! You can close this window.
**Didn’t get an email?**
- Check your spam or junk folder for an email from no-reply@localcontextshub.org
- If you do not receive an email within 5 minutes, please enter your email below to send it again. If you have not received an email after this, please contact us.
| 1.0 | Content update for Registration > Verify your email - Clarify and add in time limit on the Verify your email page during registration.
### Current text
<img width="1032" alt="Screen Shot 2022-10-04 at 2 52 06 PM" src="https://user-images.githubusercontent.com/49764220/193902039-391afad6-8f8c-4bf5-8d32-4bcb66900819.png">
### Updated text (bold and bullets as below, if possible)
**A verification link has been sent to the email you provided.**
- Your email must be verified within 30 days in order to be activated.
- If you have verified your email, you’re all set! You can close this window.
**Didn’t get an email?**
- Check your spam or junk folder for an email from no-reply@localcontextshub.org
- If you do not receive an email within 5 minutes, please enter your email below to send it again. If you have not received an email after this, please contact us.
| process | content update for registration verify your email clarify and add in time limit on the verify your email page during registration current text img width alt screen shot at pm src updated text bold and bullets as below if possible a verification link has been sent to the email you provided your email must be verified within days in order to be activated if you have verified your email you’re all set you can close this window didn’t get an email check your spam or junk folder for an email from no reply localcontextshub org if you do not receive an email within minutes please enter your email below to send it again if you have not received an email after this please contact us | 1 |
7,854 | 11,028,779,732 | IssuesEvent | 2019-12-06 12:31:09 | codeuniversity/smag-mvp | https://api.github.com/repos/codeuniversity/smag-mvp | reopened | Mood-Game for data gathering and as intro | Frontend Image Processing | To get more images of the user, we can implement a small mood/gesture game with image recognition in the beginning.
Possible solution: https://pusher.com/tutorials/emotion-recognition-tensorflow | 1.0 | Mood-Game for data gathering and as intro - To get more images of the user, we can implement a small mood/gesture game with image recognition in the beginning.
Possible solution: https://pusher.com/tutorials/emotion-recognition-tensorflow | process | mood game for data gathering and as intro to get more images of the user we can implement a small mood gesture game with image recognition in the beginning possible solution | 1 |
279,914 | 8,675,713,381 | IssuesEvent | 2018-11-30 11:46:53 | cosmos/voyager | https://api.github.com/repos/cosmos/voyager | opened | wrong class on PageProposal elements | governance-1 :ballot_box: low priority | Description:
<!-- Steps to reproduce, logs, and screenshots are helpful for us to resolve the bug -->
`PageProposal` is using `validator-profile...` classes, for eg `class="validator-profile__header validator-profile__section proposal"` .
We should change them to have `proposal-page...` classes | 1.0 | wrong class on PageProposal elements - Description:
<!-- Steps to reproduce, logs, and screenshots are helpful for us to resolve the bug -->
`PageProposal` is using `validator-profile...` classes, for eg `class="validator-profile__header validator-profile__section proposal"` .
We should change them to have `proposal-page...` classes | non_process | wrong class on pageproposal elements description pageproposal is using validator profile classes for eg class validator profile header validator profile section proposal we should change them to have proposal page classes | 0 |
341 | 2,793,244,269 | IssuesEvent | 2015-05-11 09:38:05 | ecodistrict/IDSSDashboard | https://api.github.com/repos/ecodistrict/IDSSDashboard | closed | It would be helpful if we could also export a 3D model e.g. as a 3DM file. | enhancement form feedback 09102014 process step: assess alternatives | It would be helpful if we could also export a 3D model e.g. as a 3DM file. | 1.0 | It would be helpful if we could also export a 3D model e.g. as a 3DM file. - It would be helpful if we could also export a 3D model e.g. as a 3DM file. | process | it would be helpful if we could also export a model e g as a file it would be helpful if we could also export a model e g as a file | 1 |
260,901 | 8,216,626,981 | IssuesEvent | 2018-09-05 09:45:50 | kubeapps/kubeapps | https://api.github.com/repos/kubeapps/kubeapps | closed | E2E tests hang | kind/bug priority/important-soon | This job has been hanging for 5 hours https://circleci.com/gh/kubeapps/kubeapps/944
```
RROR: (gcloud.container.clusters.delete) Some requests did not succeed:
- ResponseError: code=400, message=EXTERNAL: Operation operation-1535653491640-ec212c7c is currently creating cluster kubeapps-test-master-11-ci. Please wait and try again once it is done.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
This will enable the autorepair feature for nodes. Please see
https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more
information on node autorepairs.
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
ERROR: (gcloud.container.clusters.create) ResponseError: code=409, message=EXTERNAL: Already exists: projects/helm-publish-ci/zones/us-east1-c/clusters/kubeapps-test-master-11-ci.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
``` | 1.0 | E2E tests hang - This job has been hanging for 5 hours https://circleci.com/gh/kubeapps/kubeapps/944
```
RROR: (gcloud.container.clusters.delete) Some requests did not succeed:
- ResponseError: code=400, message=EXTERNAL: Operation operation-1535653491640-ec212c7c is currently creating cluster kubeapps-test-master-11-ci. Please wait and try again once it is done.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
This will enable the autorepair feature for nodes. Please see
https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more
information on node autorepairs.
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
ERROR: (gcloud.container.clusters.create) ResponseError: code=409, message=EXTERNAL: Already exists: projects/helm-publish-ci/zones/us-east1-c/clusters/kubeapps-test-master-11-ci.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
``` | non_process | tests hang this job has been hanging for hours rror gcloud container clusters delete some requests did not succeed responseerror code message external operation operation is currently creating cluster kubeapps test master ci please wait and try again once it is done warning starting in new clusters will have basic authentication disabled by default basic authentication can be enabled or disabled manually using the enable basic auth flag warning starting in new clusters will not have a client certificate issued you can manually enable or disable the issuance of the client certificate using the issue client certificate flag warning currently vpc native is not the default mode during cluster creation in the future this will become the default mode and can be disabled using no enable ip alias flag use enable ip alias flag to suppress this warning this will enable the autorepair feature for nodes please see for more information on node autorepairs warning starting in kubernetes new clusters will no longer get compute rw and storage ro scopes added to what is specified in scopes though the latter will remain included in the default scopes to use these scopes add them explicitly to scopes to use the new behavior set container new scopes behavior property gcloud config set container new scopes behavior true error gcloud container clusters create responseerror code message external already exists projects helm publish ci zones us c clusters kubeapps test master ci the connection to the server localhost was refused did you specify the right host or port the connection to the server localhost was refused did you specify the right host or port | 0 |
6,758 | 9,884,040,957 | IssuesEvent | 2019-06-24 20:58:46 | Maps4HTML/HTML-Map-Element-UseCases-Requirements | https://api.github.com/repos/Maps4HTML/HTML-Map-Element-UseCases-Requirements | closed | Meta: Set up separate src vs build branches with automated deployment | meta/process | The report uses [ReSpec](https://github.com/w3c/respec/wiki), plus some custom scripts, to enhance the [source HTML file](https://github.com/Maps4HTML/HTML-Map-Element-UseCases-Requirements/blob/gh-pages/index.html) into the [final content](https://maps4html.github.io/HTML-Map-Element-UseCases-Requirements/).
The benefit of ReSpec is that the raw document is still a functional HTML file, but most of the repetitive code is automated.
The downside is that every time you load the report, it takes a few seconds to get ready. The longer the report gets, with more features like auto-embedding GitHub issues, the slower the start up becomes.
One option is to use two branches on GitHub: source code on a master branch, and then a gh-pages branch which has the final processed HTML file generated by ReSpec. This is what the ARIA specs do, using [Travis CI](https://travis-ci.org/) to update the gh-pages branch every time a commit is made to master. E.g., [this is the config for SVG AAM](https://travis-ci.org/w3c/svg-aam/jobs/535954059/config). It may also be possible now to do this with [GitHub Actions](https://help.github.com/en/articles/about-github-actions). | 1.0 | Meta: Set up separate src vs build branches with automated deployment - The report uses [ReSpec](https://github.com/w3c/respec/wiki), plus some custom scripts, to enhance the [source HTML file](https://github.com/Maps4HTML/HTML-Map-Element-UseCases-Requirements/blob/gh-pages/index.html) into the [final content](https://maps4html.github.io/HTML-Map-Element-UseCases-Requirements/).
The benefit of ReSpec is that the raw document is still a functional HTML file, but most of the repetitive code is automated.
The downside is that every time you load the report, it takes a few seconds to get ready. The longer the report gets, with more features like auto-embedding GitHub issues, the slower the start up becomes.
One option is to use two branches on GitHub: source code on a master branch, and then a gh-pages branch which has the final processed HTML file generated by ReSpec. This is what the ARIA specs do, using [Travis CI](https://travis-ci.org/) to update the gh-pages branch every time a commit is made to master. E.g., [this is the config for SVG AAM](https://travis-ci.org/w3c/svg-aam/jobs/535954059/config). It may also be possible now to do this with [GitHub Actions](https://help.github.com/en/articles/about-github-actions). | process | meta set up separate src vs build branches with automated deployment the report uses plus some custom scripts to enhance the into the the benefit of respec is that the raw document is still a functional html file but most of the repetitive code is automated the downside is that every time you load the report it takes a few seconds to get ready the longer the report gets with more features like auto embedding github issues the slower the start up becomes one option is to use two branches on github source code on a master branch and then a gh pages branch which has the final processed html file generated by respec this is what the aria specs do using to update the gh pages branch every time a commit is made to master e g it may also be possible now to do this with | 1 |
15,879 | 20,068,933,780 | IssuesEvent | 2022-02-04 02:24:02 | kimharr24/RPS-Deep-Learning | https://api.github.com/repos/kimharr24/RPS-Deep-Learning | opened | Add image augmentation methods | data pre-processing | To make the CNN more robust and reduce overfitting, add some image augmentation methods to a module. | 1.0 | Add image augmentation methods - To make the CNN more robust and reduce overfitting, add some image augmentation methods to a module. | process | add image augmentation methods to make the cnn more robust and reduce overfitting add some image augmentation methods to a module | 1 |
9,347 | 12,358,956,850 | IssuesEvent | 2020-05-17 08:32:04 | arunkumar9t2/scabbard | https://api.github.com/repos/arunkumar9t2/scabbard | closed | Render inherited bindings in subcomponents | enhancement module:processor | Currently due to split by component nature of graphs, subcomponents graph do not render all the inherited bindings. I think inherited bindings are really useful to find out which bindings are inherited and from where.
POC: https://github.com/arunkumar9t2/scabbard/tree/feat/rendering-inherited-bindings | 1.0 | Render inherited bindings in subcomponents - Currently due to split by component nature of graphs, subcomponents graph do not render all the inherited bindings. I think inherited bindings are really useful to find out which bindings are inherited and from where.
POC: https://github.com/arunkumar9t2/scabbard/tree/feat/rendering-inherited-bindings | process | render inherited bindings in subcomponents currently due to split by component nature of graphs subcomponents graph do not render all the inherited bindings i think inherited bindings are really useful to find out which bindings are inherited and from where poc | 1 |
12,147 | 14,741,381,952 | IssuesEvent | 2021-01-07 10:32:07 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Quarterly billing glitch | anc-process anp-important ant-bug ant-child/secondary ant-enhancement | In GitLab by @kdjstudios on Jan 15, 2019, 15:19
**Submitted by:** Charlie Crown <ccrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6611986
**Server:** External
**Client/Site:** Towne
**Account:** NA
**Issue:**
I meant to email you at the beginning of the year when we did our last billing.
We had several quarterly billed accounts that were billed on our 12/3 billing. They were also sent an invoice for the same billing again on our 12/31 invoicing. I think it also added late charges to the one customer (I couldn’t quickly identify which one, if we need to I can search longer). | 1.0 | Quarterly billing glitch - In GitLab by @kdjstudios on Jan 15, 2019, 15:19
**Submitted by:** Charlie Crown <ccrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6611986
**Server:** External
**Client/Site:** Towne
**Account:** NA
**Issue:**
I meant to email you at the beginning of the year when we did our last billing.
We had several quarterly billed accounts that were billed on our 12/3 billing. They were also sent an invoice for the same billing again on our 12/31 invoicing. I think it also added late charges to the one customer (I couldn’t quickly identify which one, if we need to I can search longer). | process | quarterly billing glitch in gitlab by kdjstudios on jan submitted by charlie crown helpdesk server external client site towne account na issue i meant to email you at the beginning of the year when we did our last billing we had several quarterly billed accounts that were billed on our billing they were also sent an invoice for the same billing again on our invoicing i think it also added late charges to the one customer i couldn’t quickly identify which one if we need to i can search longer | 1 |
295,829 | 22,273,748,640 | IssuesEvent | 2022-06-10 14:38:55 | llvm/llvm-project | https://api.github.com/repos/llvm/llvm-project | closed | Documentation: C++20 exists | enhancement good first issue clang:documentation | The standard has been published AND the compiler supports -std=c++20
It would be cool to update clang/docs/CommandGuide/clang.rst | 1.0 | Documentation: C++20 exists - The standard has been published AND the compiler supports -std=c++20
It would be cool to update clang/docs/CommandGuide/clang.rst | non_process | documentation c exists the standard has been published and the compiler supports std c it would be cool to update clang docs commandguide clang rst | 0 |
20,059 | 26,545,105,593 | IssuesEvent | 2023-01-19 23:06:10 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | Support for extracting attributes/labels from log body | enhancement processor/attributes processor/transform exporter/loki | ### Is your feature request related to a problem? Please describe.
I wanna set loki labels for my logs based on a key-value pair in my JSON formatted logs,
we provide an option to set loki labels from attributes but I'm not able to extract the body value to an attribute...
E.g
assuming the logs to be something like
```
{
"message":"Adding oranges to basket",
"service":"my-service-1",
"log_type":"orange_log"
...
},
{
"message":"Peeling apples for Milkshake",
"service":"my-service-1",
"log_type":"apple_log"
...
}
```
I wanna extract the the log_type and set it as a loki label...
Promtail supports this behaviour via its [pipeline_stages](https://grafana.com/docs/loki/latest/clients/promtail/configuration/#pipeline_stages) configuration
```
pipeline_stages:
- json:
expressions:
log_type:
- labels:
log_type:
```
### Describe the solution you'd like
I see that we already have the attribute processor which supports setting loki labels and adding context based or static attributes,
we also have the transform processor which can be extended to read/parse from log bodies
I think it would be appropriate to extend the behaviour for one of these processors..
Although not sure if this requirement is loki specific (Since most logging solutions index the body and don't need an explicit label) and needs to be implemented in the loki exporter...
### Describe alternatives you've considered
For now the alternatives seem to be either not adding these labels (which would lead to inefficient queries in loki) or using another tool (maybe kafka) to handle this processing...
### Additional context
_No response_ | 2.0 | Support for extracting attributes/labels from log body - ### Is your feature request related to a problem? Please describe.
I wanna set loki labels for my logs based on a key-value pair in my JSON formatted logs,
we provide an option to set loki labels from attributes but I'm not able to extract the body value to an attribute...
E.g
assuming the logs to be something like
```
{
"message":"Adding oranges to basket",
"service":"my-service-1",
"log_type":"orange_log"
...
},
{
"message":"Peeling apples for Milkshake",
"service":"my-service-1",
"log_type":"apple_log"
...
}
```
I wanna extract the the log_type and set it as a loki label...
Promtail supports this behaviour via its [pipeline_stages](https://grafana.com/docs/loki/latest/clients/promtail/configuration/#pipeline_stages) configuration
```
pipeline_stages:
- json:
expressions:
log_type:
- labels:
log_type:
```
### Describe the solution you'd like
I see that we already have the attribute processor which supports setting loki labels and adding context based or static attributes,
we also have the transform processor which can be extended to read/parse from log bodies
I think it would be appropriate to extend the behaviour for one of these processors..
Although not sure if this requirement is loki specific (Since most logging solutions index the body and don't need an explicit label) and needs to be implemented in the loki exporter...
### Describe alternatives you've considered
For now the alternatives seem to be either not adding these labels (which would lead to inefficient queries in loki) or using another tool (maybe kafka) to handle this processing...
### Additional context
_No response_ | process | support for extracting attributes labels from log body is your feature request related to a problem please describe i wanna set loki labels for my logs based on a key value pair in my json formatted logs we provide an option to set loki labels from attributes but i m not able to extract the body value to an attribute e g assuming the logs to be something like message adding oranges to basket service my service log type orange log message peeling apples for milkshake service my service log type apple log i wanna extract the the log type and set it as a loki label promtail supports this behaviour via its configuration pipeline stages json expressions log type labels log type describe the solution you d like i see that we already have the attribute processor which supports setting loki labels and adding context based or static attributes we also have the transform processor which can be extended to read parse from log bodies i think it would be appropriate to extend the behaviour for one of these processors although not sure if this requirement is loki specific since most logging solutions index the body and don t need an explicit label and needs to be implemented in the loki exporter describe alternatives you ve considered for now the alternatives seem to be either not adding these labels which would lead to inefficient queries in loki or using another tool maybe kafka to handle this processing additional context no response | 1 |
13,968 | 16,744,206,455 | IssuesEvent | 2021-06-11 13:40:47 | sysflow-telemetry/sf-docs | https://api.github.com/repos/sysflow-telemetry/sf-docs | opened | Implement env variable override for dot-separated attributes | enhancement sf-processor | **Indicate project**
Processor
**Describe the feature you'd like**
Enable the environment variable configuration override in the Processor's pipeline.json to support dot-separated attributes.
Example:
```json
{
"processor": "exporter",
"vault.path": "/run/secrets/k8s"
}
```
This attribute should be overwritable by setting the environment variable EXPORTER_VAULT_PATH. Note that '.'s in the original attribute should be specified as '_''s in the environment variable.
| 1.0 | Implement env variable override for dot-separated attributes - **Indicate project**
Processor
**Describe the feature you'd like**
Enable the environment variable configuration override in the Processor's pipeline.json to support dot-separated attributes.
Example:
```json
{
"processor": "exporter",
"vault.path": "/run/secrets/k8s"
}
```
This attribute should be overwritable by setting the environment variable EXPORTER_VAULT_PATH. Note that '.'s in the original attribute should be specified as '_''s in the environment variable.
| process | implement env variable override for dot separated attributes indicate project processor describe the feature you d like enable the environment variable configuration override in the processor s pipeline json to support dot separated attributes example json processor exporter vault path run secrets this attribute should be overwritable by setting the environment variable exporter vault path note that s in the original attribute should be specified as s in the environment variable | 1 |
761,875 | 26,700,776,208 | IssuesEvent | 2023-01-27 14:14:03 | Mikaaah/TownsAndKingdoms | https://api.github.com/repos/Mikaaah/TownsAndKingdoms | closed | Mana Mechanism Invalid Recipe | Priority: Critical Type: Bug In Progress Type: Crafting Recipe | Should be
<recipetype:botania:runic_altar>.addRecipe("manamechanism", <item:contenttweaker:mana_mechanism>, 5000, <item:contenttweaker:kinetic_mechanism>, <item:botania:manasteel_ingot>, <item:minecraft:quartz>);
| 1.0 | Mana Mechanism Invalid Recipe - Should be
<recipetype:botania:runic_altar>.addRecipe("manamechanism", <item:contenttweaker:mana_mechanism>, 5000, <item:contenttweaker:kinetic_mechanism>, <item:botania:manasteel_ingot>, <item:minecraft:quartz>);
| non_process | mana mechanism invalid recipe should be addrecipe manamechanism | 0 |
19,918 | 26,378,834,901 | IssuesEvent | 2023-01-12 06:33:05 | googleapis/common-protos-php | https://api.github.com/repos/googleapis/common-protos-php | reopened | Dependency Dashboard | type: process | This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Repository problems
These problems occurred while renovating this repository.
- WARN: RepoCacheS3.getCacheFolder() - appending missing trailing slash to pathname
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/phpunit-phpunit-9.x -->[chore(deps): update dependency phpunit/phpunit to v9](../pull/23)
- [ ] <!-- recreate-branch=renovate/php-8.x -->[chore(deps): update php docker tag to v8](../pull/43)
## Detected dependencies
<details><summary>composer</summary>
<blockquote>
<details><summary>composer.json</summary>
- `google/protobuf ^3.6.1`
- `phpunit/phpunit ^4.8.36||^8.5`
- `sami/sami *`
</details>
</blockquote>
</details>
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/docs.yml</summary>
- `actions/checkout v3`
- `nick-invision/retry v2`
- `php 7.4-cli`
</details>
<details><summary>.github/workflows/tests.yml</summary>
- `actions/checkout v3`
- `codecov/codecov-action v3`
- `shivammathur/setup-php v2`
- `nick-invision/retry v2`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 1.0 | Dependency Dashboard - This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Repository problems
These problems occurred while renovating this repository.
- WARN: RepoCacheS3.getCacheFolder() - appending missing trailing slash to pathname
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/phpunit-phpunit-9.x -->[chore(deps): update dependency phpunit/phpunit to v9](../pull/23)
- [ ] <!-- recreate-branch=renovate/php-8.x -->[chore(deps): update php docker tag to v8](../pull/43)
## Detected dependencies
<details><summary>composer</summary>
<blockquote>
<details><summary>composer.json</summary>
- `google/protobuf ^3.6.1`
- `phpunit/phpunit ^4.8.36||^8.5`
- `sami/sami *`
</details>
</blockquote>
</details>
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/docs.yml</summary>
- `actions/checkout v3`
- `nick-invision/retry v2`
- `php 7.4-cli`
</details>
<details><summary>.github/workflows/tests.yml</summary>
- `actions/checkout v3`
- `codecov/codecov-action v3`
- `shivammathur/setup-php v2`
- `nick-invision/retry v2`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| process | dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more repository problems these problems occurred while renovating this repository warn getcachefolder appending missing trailing slash to pathname ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull detected dependencies composer composer json google protobuf phpunit phpunit sami sami github actions github workflows docs yml actions checkout nick invision retry php cli github workflows tests yml actions checkout codecov codecov action shivammathur setup php nick invision retry check this box to trigger a request for renovate to run again on this repository | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.