Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
49,156 | 6,011,539,442 | IssuesEvent | 2017-06-06 15:24:07 | IBMStreams/streamsx.kafka | https://api.github.com/repos/IBMStreams/streamsx.kafka | closed | Add additional tests | Target: v1.0.0 Type: Test-related | Need tests for the following:
* ~~each of the supported message and key attribute types (int, blob, float64, etc)~~
* ~~app config vs. properties file~~
* ~~`startPosition` param (KafkaConsumer)~~
* ~~each of the different attribute parameters (`topicAttrName`, `outputMessageAttrName`, etc)~~ | 1.0 | Add additional tests - Need tests for the following:
* ~~each of the supported message and key attribute types (int, blob, float64, etc)~~
* ~~app config vs. properties file~~
* ~~`startPosition` param (KafkaConsumer)~~
* ~~each of the different attribute parameters (`topicAttrName`, `outputMessageAttrName`, etc)~~ | test | add additional tests need tests for the following each of the supported message and key attribute types int blob etc app config vs properties file startposition param kafkaconsumer each of the different attribute parameters topicattrname outputmessageattrname etc | 1 |
306,443 | 9,393,475,165 | IssuesEvent | 2019-04-07 11:58:14 | wix/wix-style-react | https://api.github.com/repos/wix/wix-style-react | closed | `<Page/>` +`<PageHeader/>` - Use renderProps instead of `cloneElement` (`minimized`, `hasBackgroundImage` unrecognized attributes ) | API Page PageHeader Priority:High | # ✨ Feature Request
### 📦 Scope
<PageHeader/>
### Explanation
For example, `<PageHeader/>`'s `actionsBar` prop is of ReactNode type.
We clone it:
```js
React.cloneElement(actionsBar, { minimized, hasBackgroundImage })
```
This causes an inconvenience in the simple use case where `actionsBar` is a `<Button/>`.
Native `<button/>` does not support `minimized`, `hasBackgroundImage` attributes, so we get warnings from React.
Props list to change:
- [ ] PageHeader - actionsBar
- [x] PageHeader - breadcrumbs
- [x] Page - childrenObject.PageHeader
- [x] Page - childrenObject.PageTail
- [x] Page - childrenObject.PageFixedContent (no props added in clone, but we can add it to API)
### 💾 Possible solution <!-- optional -->
Use renderProp.
```js
typeof actionsBar === 'function' ? actionsBar({ minimized, hasBackgroundImage }) : actionsBar;
```
### Severity
- Low
| 1.0 | `<Page/>` +`<PageHeader/>` - Use renderProps instead of `cloneElement` (`minimized`, `hasBackgroundImage` unrecognized attributes ) - # ✨ Feature Request
### 📦 Scope
<PageHeader/>
### Explanation
For example, `<PageHeader/>`'s `actionsBar` prop is of ReactNode type.
We clone it:
```js
React.cloneElement(actionsBar, { minimized, hasBackgroundImage })
```
This causes an inconvenience in the simple use case where `actionsBar` is a `<Button/>`.
Native `<button/>` does not support `minimized`, `hasBackgroundImage` attributes, so we get warnings from React.
Props list to change:
- [ ] PageHeader - actionsBar
- [x] PageHeader - breadcrumbs
- [x] Page - childrenObject.PageHeader
- [x] Page - childrenObject.PageTail
- [x] Page - childrenObject.PageFixedContent (no props added in clone, but we can add it to API)
### 💾 Possible solution <!-- optional -->
Use renderProp.
```js
typeof actionsBar === 'function' ? actionsBar({ minimized, hasBackgroundImage }) : actionsBar;
```
### Severity
- Low
| non_test | use renderprops instead of cloneelement minimized hasbackgroundimage unrecognized attributes ✨ feature request 📦 scope explanation for example s actionsbar prop is of reactnode type we clone it js react cloneelement actionsbar minimized hasbackgroundimage this causes an inconvenience in the simple use case where actionsbar is a native does not support minimized hasbackgroundimage attributes so we get warnings from react props list to change pageheader actionsbar pageheader breadcrumbs page childrenobject pageheader page childrenobject pagetail page childrenobject pagefixedcontent no props added in clone but we can add it to api 💾 possible solution use renderprop js typeof actionsbar function actionsbar minimized hasbackgroundimage actionsbar severity low | 0 |
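The render-prop approach proposed in the row above can be sketched without any framework; `resolveActionsBar` and the state shape are illustrative names, not part of the wix-style-react API:

```javascript
// Minimal sketch of the render-prop pattern proposed above.
// `actionsBar` may be either a plain node (a string here, for simplicity)
// or a function that receives layout state from the parent.
function resolveActionsBar(actionsBar, state) {
  // Only call it when the consumer opted into the render-prop form;
  // plain nodes are passed through untouched, so no unknown props
  // (`minimized`, `hasBackgroundImage`) leak onto native elements.
  return typeof actionsBar === 'function' ? actionsBar(state) : actionsBar;
}

// A consumer that cares about `minimized` passes a function...
const aware = resolveActionsBar(
  ({ minimized }) => (minimized ? 'small button' : 'big button'),
  { minimized: true, hasBackgroundImage: false }
);

// ...while a plain node is returned unchanged.
const plain = resolveActionsBar('plain button', { minimized: true });

console.log(aware); // 'small button'
console.log(plain); // 'plain button'
```

Because the plain-node branch is returned untouched, no unknown props ever reach a native element, which is exactly what silences the React warnings.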
137,424 | 11,136,933,802 | IssuesEvent | 2019-12-20 17:53:45 | ValveSoftware/steam-for-linux | https://api.github.com/repos/ValveSoftware/steam-for-linux | closed | Invalid pointer crash on startup | Need Retest reviewed | #### Your system information
* Steam client version: 1.0.0.54
* Distribution (e.g. Ubuntu): Arch Linux (Yeah, I know, not officially supported.)
* Opted into Steam client beta?: No
* Have you checked for system updates?: Yes
Running ldd on the binary:
https://gist.github.com/stertingen/a521f63b8371249594c82e42ce1df00d
#### Please describe your issue in as much detail as possible:
Launching Steam on Arch Linux does not work, creating following console output: https://gist.github.com/stertingen/84849aa4626ced88727b9f30f886fc9e
I tried both runtime enabled and disabled (disabled = load native system libraries), scroll down to see the log with runtime disabled.
Dump for the crash (runtime enabled)
[assert_20170116101753_1.dmp.txt](https://github.com/ValveSoftware/steam-for-linux/files/707719/assert_20170116101753_1.dmp.txt)
Renamed to .txt for upload reasons.
These files contain more information about my hardware. (CPU etc.)
Dump for the crash (runtime disabled)
[assert_20170116101430_1.dmp.txt](https://github.com/ValveSoftware/steam-for-linux/files/707721/assert_20170116101430_1.dmp.txt)
#### Steps for reproducing this issue:
That's hard because on my Laptop, also Arch Linux, it works fine.
| 1.0 | Invalid pointer crash on startup - #### Your system information
* Steam client version: 1.0.0.54
* Distribution (e.g. Ubuntu): Arch Linux (Yeah, I know, not officially supported.)
* Opted into Steam client beta?: No
* Have you checked for system updates?: Yes
Running ldd on the binary:
https://gist.github.com/stertingen/a521f63b8371249594c82e42ce1df00d
#### Please describe your issue in as much detail as possible:
Launching Steam on Arch Linux does not work, creating following console output: https://gist.github.com/stertingen/84849aa4626ced88727b9f30f886fc9e
I tried both runtime enabled and disabled (disabled = load native system libraries), scroll down to see the log with runtime disabled.
Dump for the crash (runtime enabled)
[assert_20170116101753_1.dmp.txt](https://github.com/ValveSoftware/steam-for-linux/files/707719/assert_20170116101753_1.dmp.txt)
Renamed to .txt for upload reasons.
These files contain more information about my hardware. (CPU etc.)
Dump for the crash (runtime disabled)
[assert_20170116101430_1.dmp.txt](https://github.com/ValveSoftware/steam-for-linux/files/707721/assert_20170116101430_1.dmp.txt)
#### Steps for reproducing this issue:
That's hard because on my Laptop, also Arch Linux, it works fine.
| test | invalid pointer crash on startup your system information steam client version distribution e g ubuntu arch linux yeah i know not officially supported opted into steam client beta no have you checked for system updates yes running ldd on the binary please describe your issue in as much detail as possible launching steam on arch linux does not work creating following console output i tried both runtime enabled and disabled disabled load native system libraries scroll down to see the log with runtime disabled dump for the crash runtime enabled renamed to txt for upload reasons these files contain more information about my hardware cpu etc dump for the crash runtime disabled steps for reproducing this issue that s hard because on my laptop also arch linux it works fine | 1 |
285,056 | 21,482,541,173 | IssuesEvent | 2022-04-26 19:15:56 | hyperledger/firefly | https://api.github.com/repos/hyperledger/firefly | closed | Docs need to be updated to reflect new API endpoints | documentation | We recently deprecated the `broadcast/message` and `send/message` endpoints in favor of a new format. The Getting Started tutorials https://labs.hyperledger.org/firefly/gettingstarted/gettingstarted.html and possibly other places should be updated to reflect that, so new developers don't start implementing new projects against the deprecated paths. | 1.0 | Docs need to be updated to reflect new API endpoints - We recently deprecated the `broadcast/message` and `send/message` endpoints in favor of a new format. The Getting Started tutorials https://labs.hyperledger.org/firefly/gettingstarted/gettingstarted.html and possibly other places should be updated to reflect that, so new developers don't start implementing new projects against the deprecated paths. | non_test | docs need to be updated to reflect new api endpoints we recently deprecated the broadcast message and send message endpoints in favor of a new format the getting started tutorials and possibly other places should be updated to reflect that so new developers don t start implementing new projects against the deprecated paths | 0 |
325,360 | 9,923,246,989 | IssuesEvent | 2019-07-01 06:38:32 | ca25nada/refactored-pancake | https://api.github.com/repos/ca25nada/refactored-pancake | closed | Tooltip Implementation | Priority: Medium Status: In Progress Type: Feature | Implement tooltips for giving users hints for the item that the mouse is currently hovering over. (e.g. hotkey info for a button)
Current requirements:
- Be available on any screen
- Follow the mouse cursor's X/Y position
- Text box should resize accordingly depending on text length
- Text box should not exit off of the screen
| 1.0 | Tooltip Implementation - Implement tooltips for giving users hints for the item that the mouse is currently hovering over. (e.g. hotkey info for a button)
Current requirements:
- Be available on any screen
- Follow the mouse cursor's X/Y position
- Text box should resize accordingly depending on text length
- Text box should not exit off of the screen
| non_test | tooltip implementation implement tooltips for giving users hints for the item that the mouse is currently hovering over e g hotkey info for a button current requirements be available on any screen follow the mouse cursor s x y position text box should resize accordingly depending on text length text box should not exit off of the screen | 0 |
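The last two requirements above (resize with content, never leave the screen) reduce to clamping the tooltip's top-left corner against the viewport. A minimal sketch, where all sizes are plain pixel numbers and the 10px cursor offset is an assumed value, not from the issue:

```javascript
// Clamp a tooltip's top-left corner so the box stays fully on screen.
// All sizes are in pixels; the 10px offset keeps the box from sitting
// directly under the cursor (an illustrative choice, not from the issue).
function tooltipPosition(mouseX, mouseY, boxW, boxH, screenW, screenH) {
  const offset = 10;
  let x = mouseX + offset;
  let y = mouseY + offset;
  // If the box would spill off the right/bottom edge, pull it back inside.
  if (x + boxW > screenW) x = screenW - boxW;
  if (y + boxH > screenH) y = screenH - boxH;
  // Never go negative (box wider/taller than the screen, or cursor at 0,0).
  return { x: Math.max(0, x), y: Math.max(0, y) };
}

// Near the bottom-right corner the box is pulled back onto the screen.
console.log(tooltipPosition(1900, 1060, 200, 80, 1920, 1080)); // { x: 1720, y: 1000 }
// In the middle of the screen it simply follows the cursor.
console.log(tooltipPosition(500, 300, 200, 80, 1920, 1080)); // { x: 510, y: 310 }
```

The box size would come from measuring the rendered text each frame, which covers the "resize accordingly" requirement as well.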
258,592 | 22,329,887,371 | IssuesEvent | 2022-06-14 13:46:07 | vaadin/testbench | https://api.github.com/repos/vaadin/testbench | opened | Try to fail fast if mock environment is not set up | UITest | It may happen that the wrong base class is used when creating tests and then the mock environment does not get set up.
For example, this happens when extending `UIUnitTest` but the project is configured to run only JUnit 4, so the `@BeforeEach` hook is not invoked at all.
To give fast feedback when the environment is not set up correctly, we may check that a UI is available in the most common test helpers:
* navigate
* getCurrentView
* query
* wrap | 1.0 | Try to fail fast if mock environment is not set up - It may happen that the wrong base class is used when creating tests and then the mock environment does not get set up.
For example, this happens when extending `UIUnitTest` but the project is configured to run only JUnit 4, so the `@BeforeEach` hook is not invoked at all.
To give fast feedback when the environment is not set up correctly, we may check that a UI is available in the most common test helpers:
* navigate
* getCurrentView
* query
* wrap | test | try to fail fast if mock environment is not set up it may happen that the wrong base class is used when creating tests and then the mock environment does not get set up for example this happens when extending uiunittest but the project is configured to run only junit so the beforeeach hook is not invoked at all to have a fast feedback that the environment is not set up correctly we may check that ui is available in most common test helpers navigate getcurrentview query wrap | 1 |
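The fail-fast idea above, sketched framework-free (the Vaadin/JUnit specifics are out of scope here; `MockEnvironment` and all names are illustrative): each helper checks the shared environment handle first and throws a descriptive error instead of an obscure late failure.

```javascript
// Sketch of the fail-fast check proposed above: every test helper first
// verifies the mock environment was initialised, and otherwise throws a
// descriptive error instead of failing later with a confusing one.
class MockEnvironment {
  constructor() {
    this.ui = null; // set by the setup hook (a @BeforeEach equivalent)
  }
  setUp() {
    this.ui = { currentView: 'home' };
  }
  ensureInitialized() {
    if (this.ui === null) {
      throw new Error(
        'Mock environment not set up. Did the setup hook run? ' +
        '(e.g. wrong base class, or JUnit 4 ignoring @BeforeEach)'
      );
    }
  }
  // Each public helper fails fast before doing any real work.
  getCurrentView() {
    this.ensureInitialized();
    return this.ui.currentView;
  }
}

const env = new MockEnvironment();
let message = '';
try {
  env.getCurrentView(); // setup hook never ran
} catch (e) {
  message = e.message;
}
console.log(message.startsWith('Mock environment not set up')); // true

env.setUp();
console.log(env.getCurrentView()); // 'home'
```

The same guard would go into each helper listed in the issue (navigate, getCurrentView, query, wrap), so the misconfiguration surfaces on first use.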
138,201 | 11,194,428,090 | IssuesEvent | 2020-01-03 00:58:53 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | rancher istio 1.4.2 UI changes | [zube]: To Test area/istio kind/enhancement team/ui | To make istio 1.4.2 work in rancher, we need the follwing UI changes:
1. Change kiali link to be not hard-coded. This is due to the change that in istio 1.4.2 they changed kiali service name and port.
2. Change default answer pair to disable PodDisruptionBudget. This is to work around node draining issue for istio control plane. https://github.com/istio/istio/issues/12602. We should also add documentation so that if user is going to enable PodDisruptionBudget then they have to increase replicas numbers of istio control plane. | 1.0 | rancher istio 1.4.2 UI changes - To make istio 1.4.2 work in rancher, we need the following UI changes:
1. Change kiali link to be not hard-coded. This is due to the change that in istio 1.4.2 they changed kiali service name and port.
2. Change default answer pair to disable PodDisruptionBudget. This is to work around node draining issue for istio control plane. https://github.com/istio/istio/issues/12602. We should also add documentation so that if user is going to enable PodDisruptionBudget then they have to increase replicas numbers of istio control plane. | test | rancher istio ui changes to make istio work in rancher we need the following ui changes change kiali link to be not hard coded this is due to the change that in istio they changed kiali service name and port change default answer pair to disable poddisruptionbudget this is to work around node draining issue for istio control plane we should also add documentation so that if user is going to enable poddisruptionbudget then they have to increase replicas numbers of istio control plane | 1 |
86,294 | 24,814,888,843 | IssuesEvent | 2022-10-25 12:26:12 | elastic/elastic-agent | https://api.github.com/repos/elastic/elastic-agent | closed | Build 508 for main with status FAILURE | Team:Elastic-Agent-Control-Plane ci-reported automation build-failures |
## :broken_heart: Build Failed
<!-- BUILD BADGES-->
> _the below badges are clickable and redirect to their specific view in the CI or DOCS_
[](https://fleet-ci.elastic.co/blue/organizations/jenkins/elastic-agent%2Felastic-agent-mbp%2Fmain/detail/main/508//pipeline) [](https://fleet-ci.elastic.co/blue/organizations/jenkins/elastic-agent%2Felastic-agent-mbp%2Fmain/detail/main/508//tests) [](https://fleet-ci.elastic.co/blue/organizations/jenkins/elastic-agent%2Felastic-agent-mbp%2Fmain/detail/main/508//changes) [](https://fleet-ci.elastic.co/blue/organizations/jenkins/elastic-agent%2Felastic-agent-mbp%2Fmain/detail/main/508//artifacts) [](http://elastic-agent_null.docs-preview.app.elstc.co/diff) [](https://ci-stats.elastic.co/app/apm/services/fleet-ci/transactions/view?rangeFrom=2022-10-13T09:09:45.169Z&rangeTo=2022-10-13T09:29:45.169Z&transactionName=BUILD+elastic-agent%2Felastic-agent-mbp%2Fmain&transactionType=job&latencyAggregationType=avg&traceId=d990a414fd818bdf6d929ede756ad766&transactionId=945ec81a24a864b0)
<!-- BUILD SUMMARY-->
<details><summary>Expand to view the summary</summary>
<p>
#### Build stats
* Start Time: 2022-10-13T09:19:45.169+0000
* Duration: 41 min 46 sec
#### Test stats :test_tube:
| Test | Results |
| ------------ | :-----------------------------: |
| Failed | 0 |
| Passed | 4991 |
| Skipped | 17 |
| Total | 5008 |
</p>
</details>
<!-- TEST RESULTS IF ANY-->
<!-- STEPS ERRORS IF ANY -->
### Steps errors [](https://fleet-ci.elastic.co/blue/organizations/jenkins/elastic-agent%2Felastic-agent-mbp%2Fmain/detail/main/508//pipeline)
<details><summary>Expand to view the steps failures</summary>
<p>
##### `[Clone] Kibana-Repository`
<ul>
<li>Took 4 min 35 sec . View more details <a href="https://fleet-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/elastic-agent/pipelines/elastic-agent-mbp/pipelines/main/runs/508/steps/1980/log/?start=0">here</a></li>
<li>Description: <code> make ci-clone-kibana-repository cp Makefile ./kibana cd kibana make ci-create-kubernetes-templates-pull-request </code></li>
</ul>
</p>
</details>
| 1.0 | Build 508 for main with status FAILURE -
## :broken_heart: Build Failed
<!-- BUILD BADGES-->
> _the below badges are clickable and redirect to their specific view in the CI or DOCS_
[](https://fleet-ci.elastic.co/blue/organizations/jenkins/elastic-agent%2Felastic-agent-mbp%2Fmain/detail/main/508//pipeline) [](https://fleet-ci.elastic.co/blue/organizations/jenkins/elastic-agent%2Felastic-agent-mbp%2Fmain/detail/main/508//tests) [](https://fleet-ci.elastic.co/blue/organizations/jenkins/elastic-agent%2Felastic-agent-mbp%2Fmain/detail/main/508//changes) [](https://fleet-ci.elastic.co/blue/organizations/jenkins/elastic-agent%2Felastic-agent-mbp%2Fmain/detail/main/508//artifacts) [](http://elastic-agent_null.docs-preview.app.elstc.co/diff) [](https://ci-stats.elastic.co/app/apm/services/fleet-ci/transactions/view?rangeFrom=2022-10-13T09:09:45.169Z&rangeTo=2022-10-13T09:29:45.169Z&transactionName=BUILD+elastic-agent%2Felastic-agent-mbp%2Fmain&transactionType=job&latencyAggregationType=avg&traceId=d990a414fd818bdf6d929ede756ad766&transactionId=945ec81a24a864b0)
<!-- BUILD SUMMARY-->
<details><summary>Expand to view the summary</summary>
<p>
#### Build stats
* Start Time: 2022-10-13T09:19:45.169+0000
* Duration: 41 min 46 sec
#### Test stats :test_tube:
| Test | Results |
| ------------ | :-----------------------------: |
| Failed | 0 |
| Passed | 4991 |
| Skipped | 17 |
| Total | 5008 |
</p>
</details>
<!-- TEST RESULTS IF ANY-->
<!-- STEPS ERRORS IF ANY -->
### Steps errors [](https://fleet-ci.elastic.co/blue/organizations/jenkins/elastic-agent%2Felastic-agent-mbp%2Fmain/detail/main/508//pipeline)
<details><summary>Expand to view the steps failures</summary>
<p>
##### `[Clone] Kibana-Repository`
<ul>
<li>Took 4 min 35 sec . View more details <a href="https://fleet-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/elastic-agent/pipelines/elastic-agent-mbp/pipelines/main/runs/508/steps/1980/log/?start=0">here</a></li>
<li>Description: <code> make ci-clone-kibana-repository cp Makefile ./kibana cd kibana make ci-create-kubernetes-templates-pull-request </code></li>
</ul>
</p>
</details>
| non_test | build for main with status failure broken heart build failed the below badges are clickable and redirect to their specific view in the ci or docs expand to view the summary build stats start time duration min sec test stats test tube test results failed passed skipped total steps errors expand to view the steps failures kibana repository took min sec view more details a href description make ci clone kibana repository cp makefile kibana cd kibana make ci create kubernetes templates pull request | 0 |
29,031 | 11,706,183,582 | IssuesEvent | 2020-03-07 20:27:43 | vlaship/hadoop-wc | https://api.github.com/repos/vlaship/hadoop-wc | opened | CVE-2019-12384 (Medium) detected in jackson-databind-2.9.5.jar | security vulnerability | ## CVE-2019-12384 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- hadoop-client-3.2.0.jar (Root Library)
- hadoop-common-3.2.0.jar
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vlaship/hadoop-wc/commit/f1363bd417f4ca7591b0fef369881a3acd4cdeb5">f1363bd417f4ca7591b0fef369881a3acd4cdeb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.9.1 might allow attackers to have a variety of impacts by leveraging failure to block the logback-core class from polymorphic deserialization. Depending on the classpath content, remote code execution may be possible.
<p>Publish Date: 2019-06-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12384>CVE-2019-12384</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12384">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12384</a></p>
<p>Release Date: 2019-06-24</p>
<p>Fix Resolution: 2.9.9.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-12384 (Medium) detected in jackson-databind-2.9.5.jar - ## CVE-2019-12384 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.5/3490508379d065fe3fcb80042b62f630f7588606/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- hadoop-client-3.2.0.jar (Root Library)
- hadoop-common-3.2.0.jar
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vlaship/hadoop-wc/commit/f1363bd417f4ca7591b0fef369881a3acd4cdeb5">f1363bd417f4ca7591b0fef369881a3acd4cdeb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.9.1 might allow attackers to have a variety of impacts by leveraging failure to block the logback-core class from polymorphic deserialization. Depending on the classpath content, remote code execution may be possible.
<p>Publish Date: 2019-06-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12384>CVE-2019-12384</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12384">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12384</a></p>
<p>Release Date: 2019-06-24</p>
<p>Fix Resolution: 2.9.9.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy hadoop client jar root library hadoop common jar x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before might allow attackers to have a variety of impacts by leveraging failure to block the logback core class from polymorphic deserialization depending on the classpath content remote code execution may be possible publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
273,225 | 23,739,084,683 | IssuesEvent | 2022-08-31 10:44:51 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | opened | Release 4.3.7-2 - Release Candidate 2 - Packages tests | team/cicd release test/4.3.7 | ### Packages tests information
| | |
|---------------------------------|--------------------------------------------|
| **Main release candidate issue** | #14691 |
| **Version** | 4.3.7 |
| **Release candidate #** | -- |
| **Tag** | https://github.com/wazuh/wazuh/tree/v4.3.7 |
| **Previous packages metrics** | -- |
| Status | Result | Test | Issue |
| -- | -- | -- | -- |
| ⚫ | ⚫ | Installation | -- |
| ⚫ | ⚫ | Upgrade | -- |
| ⚪ | ⚪ | SELinux | -- |
| ⚪ | ⚪ | Register | -- |
| ⚪ | ⚪ | Service | -- |
| ⚪ | ⚪ | Specific systems | -- |
| ⚪ | ⚪ | Indexer/Dashboard | -- |
Result legend:
⚫ - Not started
⚪ - Skipped
🟡 - Pending/In progress
✔️ - Results Ready
⚠️ - Review required
Status legend:
⚫ - None
⚪ - Skipped
🔴 - Rejected
🟢 - Approved
## Auditors validation
In order to close and proceed with release or the next candidate version, the following auditors must give the green light to this RC.
- [ ] @okynos
| 1.0 | Release 4.3.7-2 - Release Candidate 2 - Packages tests - ### Packages tests information
| | |
|---------------------------------|--------------------------------------------|
| **Main release candidate issue** | #14691 |
| **Version** | 4.3.7 |
| **Release candidate #** | -- |
| **Tag** | https://github.com/wazuh/wazuh/tree/v4.3.7 |
| **Previous packages metrics** | -- |
| Status | Result | Test | Issue |
| -- | -- | -- | -- |
| ⚫ | ⚫ | Installation | -- |
| ⚫ | ⚫ | Upgrade | -- |
| ⚪ | ⚪ | SELinux | -- |
| ⚪ | ⚪ | Register | -- |
| ⚪ | ⚪ | Service | -- |
| ⚪ | ⚪ | Specific systems | -- |
| ⚪ | ⚪ | Indexer/Dashboard | -- |
Result legend:
⚫ - Not started
⚪ - Skipped
🟡 - Pending/In progress
✔️ - Results Ready
⚠️ - Review required
Status legend:
⚫ - None
⚪ - Skipped
🔴 - Rejected
🟢 - Approved
## Auditors validation
In order to close and proceed with release or the next candidate version, the following auditors must give the green light to this RC.
- [ ] @okynos
| test | release release candidate packages tests packages tests information main release candidate issue version release candidate tag previous packages metrics status result test issue ⚫ ⚫ installation ⚫ ⚫ upgrade ⚪ ⚪ selinux ⚪ ⚪ register ⚪ ⚪ service ⚪ ⚪ specific systems ⚪ ⚪ indexer dashboard result legend ⚫ not started ⚪ skipped 🟡 pending in progress ✔️ results ready ⚠️ review required status legend ⚫ none ⚪ skipped 🔴 rejected 🟢 approved auditors validation in order to close and proceed with release or the next candidate version the following auditors must give the green light to this rc okynos | 1 |
326,945 | 28,033,506,321 | IssuesEvent | 2023-03-28 13:48:20 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] IndexShardTests testShardExposesWriteLoadStats failing | >test-failure :Distributed/Distributed Team:Distributed | **Build scan:**
https://gradle-enterprise.elastic.co/s/4th3kse67fo64/tests/:server:test/org.elasticsearch.index.shard.IndexShardTests/testShardExposesWriteLoadStats
**Reproduction line:**
```
./gradlew ':server:test' --tests "org.elasticsearch.index.shard.IndexShardTests.testShardExposesWriteLoadStats" -Dtests.seed=61A3A8364DDF1313 -Dtests.locale=he -Dtests.timezone=America/Indiana/Knox -Druntime.java=20
```
**Applicable branches:**
main
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.index.shard.IndexShardTests&tests.test=testShardExposesWriteLoadStats
**Failure excerpt:**
```
java.lang.AssertionError:
Expected: is <1.0>
but: was <0.1>
at __randomizedtesting.SeedInfo.seed([61A3A8364DDF1313:FB05FAD6770A7A47]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.elasticsearch.index.shard.IndexShardTests.testShardExposesWriteLoadStats(IndexShardTests.java:4678)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.lang.reflect.Method.invoke(Method.java:578)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:1623)
``` | 1.0 | [CI] IndexShardTests testShardExposesWriteLoadStats failing - **Build scan:**
https://gradle-enterprise.elastic.co/s/4th3kse67fo64/tests/:server:test/org.elasticsearch.index.shard.IndexShardTests/testShardExposesWriteLoadStats
**Reproduction line:**
```
./gradlew ':server:test' --tests "org.elasticsearch.index.shard.IndexShardTests.testShardExposesWriteLoadStats" -Dtests.seed=61A3A8364DDF1313 -Dtests.locale=he -Dtests.timezone=America/Indiana/Knox -Druntime.java=20
```
**Applicable branches:**
main
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.index.shard.IndexShardTests&tests.test=testShardExposesWriteLoadStats
**Failure excerpt:**
```
java.lang.AssertionError:
Expected: is <1.0>
but: was <0.1>
at __randomizedtesting.SeedInfo.seed([61A3A8364DDF1313:FB05FAD6770A7A47]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.elasticsearch.index.shard.IndexShardTests.testShardExposesWriteLoadStats(IndexShardTests.java:4678)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.lang.reflect.Method.invoke(Method.java:578)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:1623)
``` | test | indexshardtests testshardexposeswriteloadstats failing build scan reproduction line gradlew server test tests org elasticsearch index shard indexshardtests testshardexposeswriteloadstats dtests seed dtests locale he dtests timezone america indiana knox druntime java applicable branches main reproduces locally didn t try failure history failure excerpt java lang assertionerror expected is but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org junit assert assertthat assert java at org elasticsearch index shard indexshardtests testshardexposeswriteloadstats indexshardtests java at jdk internal reflect directmethodhandleaccessor invoke directmethodhandleaccessor java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java | 1 |
316,235 | 27,148,105,422 | IssuesEvent | 2023-02-16 21:50:11 | w3c/aria-at | https://api.github.com/repos/w3c/aria-at | closed | Tester issue report for: "Open submenu in interaction mode" | tests pilot-test-2020 |
### Test file at exact commit
[tests/menubar-editor/test-11-open-submenu-of-menubar-interaction.html](https://github.com/w3c/aria-at/blob/2a3c46f1cbd20e7f2f0a706052d2d82aaf207a29/tests/menubar-editor/test-11-open-submenu-of-menubar-interaction.html)
### Cycle:
Test Pilot (2020-05-27)
### AT:
NVDA (version 2020.1)
### Browser:
Firefox (version 76.0.1)
### Description
Should the state of the radio button also be conveyed
| 2.0 | Tester issue report for: "Open submenu in interaction mode" -
### Test file at exact commit
[tests/menubar-editor/test-11-open-submenu-of-menubar-interaction.html](https://github.com/w3c/aria-at/blob/2a3c46f1cbd20e7f2f0a706052d2d82aaf207a29/tests/menubar-editor/test-11-open-submenu-of-menubar-interaction.html)
### Cycle:
Test Pilot (2020-05-27)
### AT:
NVDA (version 2020.1)
### Browser:
Firefox (version 76.0.1)
### Description
Should the state of the radio button also be conveyed
| test | tester issue report for open submenu in interaction mode test file at exact commit cycle test pilot at nvda version browser firefox version description should the state of the radio button also be conveyed | 1 |
14,970 | 11,274,272,304 | IssuesEvent | 2020-01-14 18:12:46 | cashapp/sqldelight | https://api.github.com/repos/cashapp/sqldelight | closed | Use Travis CI build stages | enhancement infrastructure | https://docs.travis-ci.com/user/build-stages
This should allow us to prevent snapshot deployment until linux and mac os pass. The downside is that we can't easily share artifacts so we would have to rebuild on mac os to deploy everything as the final stage.
(Circle CI would let us do partial builds on multiple platforms, aggregate, and publish) | 1.0 | Use Travis CI build stages - https://docs.travis-ci.com/user/build-stages
This should allow us to prevent snapshot deployment until linux and mac os pass. The downside is that we can't easily share artifacts so we would have to rebuild on mac os to deploy everything as the final stage.
(Circle CI would let us do partial builds on multiple platforms, aggregate, and publish) | non_test | use travis ci build stages this should allow us to prevent snapshot deployment until linux and mac os pass the downside is that we can t easily share artifacts so we would have to rebuild on mac os to deploy everything as the final stage circle ci would let us do partial builds on multiple platforms aggregate and publish | 0 |
302,340 | 26,139,861,719 | IssuesEvent | 2022-12-29 16:49:52 | apache/beam | https://api.github.com/repos/apache/beam | closed | Deploy PerfKit Explorer for Beam | tests P3 bug |
Imported from Jira [BEAM-1596](https://issues.apache.org/jira/browse/BEAM-1596). Original Jira may contain additional context.
Reported by: jaku. | 1.0 | Deploy PerfKit Explorer for Beam -
Imported from Jira [BEAM-1596](https://issues.apache.org/jira/browse/BEAM-1596). Original Jira may contain additional context.
Reported by: jaku. | test | deploy perfkit explorer for beam imported from jira original jira may contain additional context reported by jaku | 1 |
324,606 | 27,812,027,983 | IssuesEvent | 2023-03-18 08:18:23 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | reopened | Fix elementwise.test_frexp | Sub Task Ivy API Experimental Failing Test | | | |
|---|---|
|tensorflow|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4347680692/jobs/7595284531" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4350568668/jobs/7601401435" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
| 1.0 | Fix elementwise.test_frexp - | | |
|---|---|
|tensorflow|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4347680692/jobs/7595284531" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4350568668/jobs/7601401435" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
| test | fix elementwise test frexp tensorflow img src torch img src numpy img src jax img src | 1 |
77,512 | 9,592,424,915 | IssuesEvent | 2019-05-09 08:54:36 | GDquest/godot-metroidvania-2d | https://api.github.com/repos/GDquest/godot-metroidvania-2d | closed | Write a game concept for the game and the course | design | Write a concept for the demo's gameplay and related points to teach.
Come up with one or more prototype(s) to create or next tasks to move forward with pre-production. | 1.0 | Write a game concept for the game and the course - Write a concept for the demo's gameplay and related points to teach.
Come up with one or more prototype(s) to create or next tasks to move forward with pre-production. | non_test | write a game concept for the game and the course write a concept for the demo s gameplay and related points to teach come up with one or more prototype s to create or next tasks to move forward with pre production | 0 |
231,757 | 7,643,078,578 | IssuesEvent | 2018-05-08 11:27:49 | dagcoin/dagcoin | https://api.github.com/repos/dagcoin/dagcoin | closed | Enable different versions of dagcoin works at the same time in the same device | enhancement high priority | Enable core gets folder name from environment. It is the clue.
## Expected Behavior
Enable different versions of dagcoin works at the same time in the same device
## Your Environment
devnet | 1.0 | Enable different versions of dagcoin works at the same time in the same device - Enable core gets folder name from environment. It is the clue.
## Expected Behavior
Enable different versions of dagcoin works at the same time in the same device
## Your Environment
devnet | non_test | enable different versions of dagcoin works at the same time in the same device enable core gets folder name from environment it is the clue expected behavior enable different versions of dagcoin works at the same time in the same device your environment devnet | 0 |
8,101 | 2,611,452,422 | IssuesEvent | 2015-02-27 05:00:11 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | closed | Placing the king requires teleportation in the weapon set. | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Play with king.
2. Use weapon set with no teleportation.
3. Run the fight.
What is the expected output? What do you see instead?
I expected that the king will use the extra teleport, as it happens, when you
choose to place all hedgehogs manually, but the king uses the teleport from
weapon set (if there is any). I think there should be an ability to choose if
you want to place (only) the king manually before starting the game.
What version of the product are you using? On what operating system?
0.9.13 on Windows XP SP2
Please provide any additional information below.
```
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 7 Oct 2010 at 4:10 | 1.0 | Placing the king requires teleportation in the weapon set. - ```
What steps will reproduce the problem?
1. Play with king.
2. Use weapon set with no teleportation.
3. Run the fight.
What is the expected output? What do you see instead?
I expected that the king will use the extra teleport, as it happens, when you
choose to place all hedgehogs manually, but the king uses the teleport from
weapon set (if there is any). I think there should be an ability to choose if
you want to place (only) the king manually before starting the game.
What version of the product are you using? On what operating system?
0.9.13 on Windows XP SP2
Please provide any additional information below.
```
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 7 Oct 2010 at 4:10 | non_test | placing the king requires teleportation in the weapon set what steps will reproduce the problem play with king use weapon set with no teleportation run the fight what is the expected output what do you see instead i expected that the king will use the extra teleport as it happens when you choose to place all hedgehogs manually but the king uses the teleport from weapon set if there is any i think there should be an ability to choose if you want to place only the king manually before starting the game what version of the product are you using on what operating system on windows xp please provide any additional information below original issue reported on code google com by adibiaz gmail com on oct at | 0 |
291,461 | 25,149,823,445 | IssuesEvent | 2022-11-10 09:09:39 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | closed | Adding rules for Sysmon ID 20 events | team/qa target/4.4.0 type/dev-testing subteam/qa-main | | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
| 4.4.0 | https://github.com/wazuh/wazuh-qa/issues/3396 | https://github.com/wazuh/wazuh/pull/13673 |
Adding rules for Sysmon ID 20 events | 1.0 | Adding rules for Sysmon ID 20 events - | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
| 4.4.0 | https://github.com/wazuh/wazuh-qa/issues/3396 | https://github.com/wazuh/wazuh/pull/13673 |
Adding rules for Sysmon ID 20 events | test | adding rules for sysmon id events target version related issue related pr adding rules for sysmon id events | 1 |
230,403 | 18,668,002,542 | IssuesEvent | 2021-10-30 06:26:27 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: sqlsmith/setup=seed/setting=no-ddl failed | C-test-failure O-robot O-roachtest branch-master release-blocker | roachtest.sqlsmith/setup=seed/setting=no-ddl [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3658629&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3658629&tab=artifacts#/sqlsmith/setup=seed/setting=no-ddl) on master @ [db229cad354c2776ea35c9dc4f78e6ce66b9437c](https://github.com/cockroachdb/cockroach/commits/db229cad354c2776ea35c9dc4f78e6ce66b9437c):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=seed/setting=no-ddl/run_1
sqlsmith.go:224,sqlsmith.go:257,test_runner.go:777: error: pq: internal error: crdb_internal.reset_index_usage_stats(): index usage stats controller not set
stmt:
SELECT
crdb_internal.reset_index_usage_stats()::BOOL AS col_3262, '\xa318b179':::BYTES AS col_3263
FROM
defaultdb.public.seed@seed__int8__float8__date_idx AS tab_1358
WHERE
tab_1358._bool
ORDER BY
tab_1358._enum ASC, tab_1358._inet
LIMIT
90:::INT8;
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
|
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=seed/setting=no-ddl.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: sqlsmith/setup=seed/setting=no-ddl failed - roachtest.sqlsmith/setup=seed/setting=no-ddl [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3658629&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3658629&tab=artifacts#/sqlsmith/setup=seed/setting=no-ddl) on master @ [db229cad354c2776ea35c9dc4f78e6ce66b9437c](https://github.com/cockroachdb/cockroach/commits/db229cad354c2776ea35c9dc4f78e6ce66b9437c):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=seed/setting=no-ddl/run_1
sqlsmith.go:224,sqlsmith.go:257,test_runner.go:777: error: pq: internal error: crdb_internal.reset_index_usage_stats(): index usage stats controller not set
stmt:
SELECT
crdb_internal.reset_index_usage_stats()::BOOL AS col_3262, '\xa318b179':::BYTES AS col_3263
FROM
defaultdb.public.seed@seed__int8__float8__date_idx AS tab_1358
WHERE
tab_1358._bool
ORDER BY
tab_1358._enum ASC, tab_1358._inet
LIMIT
90:::INT8;
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
|
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=seed/setting=no-ddl.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | roachtest sqlsmith setup seed setting no ddl failed roachtest sqlsmith setup seed setting no ddl with on master the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts sqlsmith setup seed setting no ddl run sqlsmith go sqlsmith go test runner go error pq internal error crdb internal reset index usage stats index usage stats controller not set stmt select crdb internal reset index usage stats bool as col bytes as col from defaultdb public seed seed date idx as tab where tab bool order by tab enum asc tab inet limit help see see cc cockroachdb sql queries | 1 |
792,039 | 27,943,674,012 | IssuesEvent | 2023-03-24 00:01:11 | anoma/typhon | https://api.github.com/repos/anoma/typhon | opened | Heterogeneous Paxos Stateright Prototype | enhancement Priority: A Typhon | Building on the [english spec](https://specs.anoma.net/main/components/typhon/heterogeneous_paxos.html), and the [formal spec](https://github.com/anoma/typhon/blob/main/tla/HPaxos.tla) construct a prototype version of [Heterogeneous Paxos](https://isaacsheff.com/hetcons) in the [Stateright](https://github.com/stateright/stateright) framework. To begin with, we'll try to do a single instance of consensus with some kind of simple proposal scheme.
This should ultimately be integrated with a [Heterogeneous Narwhal](https://specs.anoma.net/main/components/typhon/mempool.html) [Stateright prototype](https://github.com/anoma/typhon/issues/44) and an [execution engine](https://specs.anoma.net/main/components/typhon/execution.html) Stateright prototype for a Typhon Stateright prototype. | 1.0 | Heterogeneous Paxos Stateright Prototype - Building on the [english spec](https://specs.anoma.net/main/components/typhon/heterogeneous_paxos.html), and the [formal spec](https://github.com/anoma/typhon/blob/main/tla/HPaxos.tla) construct a prototype version of [Heterogeneous Paxos](https://isaacsheff.com/hetcons) in the [Stateright](https://github.com/stateright/stateright) framework. To begin with, we'll try to do a single instance of consensus with some kind of simple proposal scheme.
This should ultimately be integrated with a [Heterogeneous Narwhal](https://specs.anoma.net/main/components/typhon/mempool.html) [Stateright prototype](https://github.com/anoma/typhon/issues/44) and an [execution engine](https://specs.anoma.net/main/components/typhon/execution.html) Stateright prototype for a Typhon Stateright prototype. | non_test | heterogeneous paxos stateright prototype building on the and the construct a prototype version of in the framework to begin with we ll try to do a single instance of consensus with some kind of simple proposal scheme this should ultimately be integrated with a and an stateright prototype for a typhon stateright prototype | 0 |
66,105 | 6,988,881,132 | IssuesEvent | 2017-12-14 14:30:50 | teamdocs/kantu2018 | https://api.github.com/repos/teamdocs/kantu2018 | closed | Pressing stop should stop the loop completely | Ready for Test | One new discovered issue: If in a loop, pressing stop should stop the loop completely, currently it just stops the macro run.. so if I run a loop from 1... 100 I might have to press stop 99 times ;)

| 1.0 | Pressing stop should stop the loop completely - One new discovered issue: If in a loop, pressing stop should stop the loop completely, currently it just stops the macro run.. so if I run a loop from 1... 100 I might have to press stop 99 times ;)

| test | pressing stop should stop the loop completely one new discovered issue if in a loop pressing stop should stop the loop completely currently it just stops the macro run so if i run a loop from i might have to press stop times | 1 |
73,694 | 7,349,915,213 | IssuesEvent | 2018-03-08 12:30:44 | EnMasseProject/enmasse | https://api.github.com/repos/EnMasseProject/enmasse | closed | system-tests: review list of February disabled tests | component/systemtests | 1. enable tests with fixed issues
2. create new issue (March list of disabled tests) for unfixed tests | 1.0 | system-tests: review list of February disabled tests - 1. enable tests with fixed issues
2. create new issue (March list of disabled tests) for unfixed tests | test | system tests review list of february disabled tests enable tests with fixed issues create new issue march list of disabled tests for unfixed tests | 1 |
156,716 | 12,335,790,048 | IssuesEvent | 2020-05-14 12:35:28 | openethereum/openethereum | https://api.github.com/repos/openethereum/openethereum | closed | Update state tests to v7.0.0 | F4-tests π» | https://www.reddit.com/r/ethereum/comments/dd5hkj/release_v700beta1_for_ethereum_consensus_tests/ marks the first "real" release in a long time. We should update our tests to the `v7` tag when released and make sure we pull in the tests from the right folders. | 1.0 | Update state tests to v7.0.0 - https://www.reddit.com/r/ethereum/comments/dd5hkj/release_v700beta1_for_ethereum_consensus_tests/ marks the first "real" release in a long time. We should update our tests to the `v7` tag when released and make sure we pull in the tests from the right folders. | test | update state tests to marks the first real release in a long time we should update our tests to the tag when released and make sure we pull in the tests from the right folders | 1 |
468,713 | 13,489,027,377 | IssuesEvent | 2020-09-11 13:20:58 | web-platform-tests/wpt | https://api.github.com/repos/web-platform-tests/wpt | closed | Firefox stable broken since September 1st | infra priority:urgent | There have been no Firefox stable runs on wpt.fyi since September 1st ([list of runs](https://wpt.fyi/runs?label=master&label=experimental&max-count=100&product=chrome&product=edge&product=firefox&product=safari&product=webkitgtk)).
Checking the `epochs/daily` branch, looks like all reftests fail; [latest run](https://community-tc.services.mozilla.com/tasks/groups/N-qzAWqJQV-StLOtoR3KIQ).
Looking at [one log](https://community-tc.services.mozilla.com/tasks/XRXtY3VqQ7CyvnrZ--Kf6g/runs/0/logs/https%3A%2F%2Fcommunity-tc.services.mozilla.com%2Fapi%2Fqueue%2Fv1%2Ftask%2FXRXtY3VqQ7CyvnrZ--Kf6g%2Fruns%2F0%2Fartifacts%2Fpublic%2Flogs%2Flive.log), it looks like Firefox is failing to start:
```
7:03.02 INFO Application command: /home/test/build/firefox/firefox --marionette about:blank -profile /tmp/tmp5d3txd
7:03.03 pid:2456 Full command: /home/test/build/firefox/firefox --marionette about:blank -profile /tmp/tmp49Kkgv
pid:2456 console.error: SearchCache: "_readCacheFile: Error reading cache file:" (new Error("", "(unknown module)"))
7:03.03 pid:2456 1599355969164 Marionette INFO Listening on port 54715
7:03.03 INFO Starting runner
7:03.19 INFO Browser exited with return code -15
7:03.19 INFO PROCESS LEAKS None
7:03.19 WARNING Traceback (most recent call last):
File "/home/test/web-platform-tests/tools/wptrunner/wptrunner/executors/executormarionette.py", line 1025, in teardown
self.executor.protocol.marionette._send_message("reftest:teardown", {})
File "/home/test/web-platform-tests/_venv2/lib/python2.7/site-packages/marionette_driver/decorators.py", line 36, in _
m._handle_socket_failure()
File "/home/test/web-platform-tests/_venv2/lib/python2.7/site-packages/marionette_driver/marionette.py", line 654, in _handle_socket_failure
reraise(exc_cls, exc, tb)
File "/home/test/web-platform-tests/_venv2/lib/python2.7/site-packages/marionette_driver/decorators.py", line 26, in _
return func(*args, **kwargs)
File "/home/test/web-platform-tests/_venv2/lib/python2.7/site-packages/marionette_driver/marionette.py", line 594, in _send_message
msg = self.client.request(name, params)
File "/home/test/web-platform-tests/_venv2/lib/python2.7/site-packages/marionette_driver/transport.py", line 276, in request
return self.receive()
File "/home/test/web-platform-tests/_venv2/lib/python2.7/site-packages/marionette_driver/transport.py", line 162, in receive
raise socket.error("No data received over socket")
error: No data received over socket
7:03.19 INFO Closing logging queue
7:03.19 INFO queue closed
7:03.19 CRITICAL Max restarts exceeded
7:03.19 INFO PROCESS LEAKS None
7:03.24 INFO Browser exited with return code -15
7:03.24 INFO Got 0 unexpected results
7:03.24 SUITE_END
```
This may be a new version of Firefox stable, or may be a change in WPT. Digging up the working --> failed commit range for the latter first.
283,477 | 21,316,628,713 | IssuesEvent | 2022-04-16 11:49:37 | spencernah/pe | https://api.github.com/repos/spencernah/pe | opened | In UG, editing contacts feature did not mention about selecting an index greater than list size | severity.Low type.DocumentationBug | Document did not mention that users are not allowed to select an index (SEQ_NO_OF_CONTACT) that is not within the list size
<!--session: 1650104048631-0cd6c1d3-70f0-4888-a90c-34f0e9b1f41b-->
<!--Version: Web v3.4.2-->
825,783 | 31,471,610,665 | IssuesEvent | 2023-08-30 07:56:52 | markgravity/golang-ic | https://api.github.com/repos/markgravity/golang-ic | opened | [Backend] As a user, I will be required a valid token when sending authenticated request | type: feature priority: high @0.3.0 | ## Acceptance Criteria
- Setup `authenticated_request` middleware
- Apply to `keywords/upload` API
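The two acceptance criteria above describe a standard token-gate pattern. The repository is a Go project, so the Python sketch below is only a language-agnostic illustration of the intended behaviour; the `Authorization` header name, the `Bearer` prefix, the 401 response, and the token store are assumptions, not the project's actual API.

```python
# Hypothetical sketch of an `authenticated_request` middleware: reject any
# request that lacks a valid token, otherwise forward it to the wrapped
# handler (here, a stand-in for the keywords/upload endpoint).

VALID_TOKENS = {"secret-token"}  # placeholder for real token validation

def authenticated_request(handler):
    def wrapper(request):
        auth = request.get("headers", {}).get("Authorization", "")
        token = auth.removeprefix("Bearer ")
        if token not in VALID_TOKENS:
            return {"status": 401, "body": "invalid or missing token"}
        return handler(request)
    return wrapper

@authenticated_request
def upload_keywords(request):
    return {"status": 200, "body": "keywords uploaded"}

ok = upload_keywords({"headers": {"Authorization": "Bearer secret-token"}})
denied = upload_keywords({"headers": {}})
print(ok["status"], denied["status"])  # → 200 401
```

The same shape maps directly onto a Go `http.Handler` wrapper applied to the `keywords/upload` route.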
88,349 | 10,569,428,389 | IssuesEvent | 2019-10-06 19:29:41 | getsentry/sentry-php | https://api.github.com/repos/getsentry/sentry-php | closed | Add phptek/sentry package for SilverStripe projects | Status: Confirmed Type: Documentation Type: Improvement | Hi, I'm the maintainer of https://github.com/phptek/silverstripe-sentry - the most popular of the small handful of packages for Sentry integration with [SilverStripe](https://silverstripe.org) projects. Please consider listing the package on your README.
Thanks heaps!
79,960 | 7,734,437,408 | IssuesEvent | 2018-05-27 01:01:47 | ray-project/ray | https://api.github.com/repos/ray-project/ray | closed | Valgrind test failure in local scheduler test. | test failure | The test that fails sometimes is.
```
python ./python/ray/local_scheduler/test/test.py valgrind
```
Specifically, the error happens at least in `TestLocalSchedulerClient.test_scheduling_when_objects_evicted`.
The full relevant error is
```
$ python ./python/ray/local_scheduler/test/test.py valgrind
Using valgrind for tests
test_scheduling_when_objects_evicted (__main__.TestLocalSchedulerClient) ... Allowing the Plasma store to use up to 1GB of memory.
Starting object store with directory /dev/shm and huge page support disabled
==9715== Memcheck, a memory error detector
==9715== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==9715== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==9715== Command: /home/travis/.local/lib/python2.7/site-packages/ray-0.2.1-py2.7-linux-x86_64.egg/ray/local_scheduler/../core/src/local_scheduler/local_scheduler -s /tmp/scheduler7471754 -p /tmp/plasma_store42885200 -h 127.0.0.1 -n 0
==9715==
[WARN] (/home/travis/build/ray-project/ray/src/local_scheduler/local_scheduler.cc:330) No valid command to start a worker provided, local scheduler will not start any workers.
==9715== Invalid free() / delete / delete[] / realloc()
==9715== at 0x4C2BDEC: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9715== by 0x427CE3: TaskSpec_free(unsigned char*) (task.cc:310)
==9715== by 0x415B4B: TaskQueueEntry_free(TaskQueueEntry*) (local_scheduler_algorithm.cc:109)
==9715== by 0x415D20: SchedulingAlgorithmState_free(SchedulingAlgorithmState*) (local_scheduler_algorithm.cc:130)
==9715== by 0x40539E: LocalSchedulerState_free(LocalSchedulerState*) (local_scheduler.cc:198)
==9715== by 0x40A170: signal_handler(int) (local_scheduler.cc:1103)
==9715== by 0x55A5CAF: ??? (in /lib/x86_64-linux-gnu/libc-2.19.so)
==9715== by 0x4C2BDEB: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9715== by 0x427CE3: TaskSpec_free(unsigned char*) (task.cc:310)
==9715== by 0x415B4B: TaskQueueEntry_free(TaskQueueEntry*) (local_scheduler_algorithm.cc:109)
==9715== by 0x41825B: dispatch_tasks(LocalSchedulerState*, SchedulingAlgorithmState*) (local_scheduler_algorithm.cc:738)
==9715== by 0x4182E0: dispatch_all_tasks(LocalSchedulerState*, SchedulingAlgorithmState*) (local_scheduler_algorithm.cc:754)
==9715== Address 0x5c4d450 is 0 bytes inside a block of size 332 free'd
==9715== at 0x4C2BDEC: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9715== by 0x427CE3: TaskSpec_free(unsigned char*) (task.cc:310)
==9715== by 0x415B4B: TaskQueueEntry_free(TaskQueueEntry*) (local_scheduler_algorithm.cc:109)
==9715== by 0x41825B: dispatch_tasks(LocalSchedulerState*, SchedulingAlgorithmState*) (local_scheduler_algorithm.cc:738)
==9715== by 0x4182E0: dispatch_all_tasks(LocalSchedulerState*, SchedulingAlgorithmState*) (local_scheduler_algorithm.cc:754)
==9715== by 0x41AD1F: handle_object_available(LocalSchedulerState*, SchedulingAlgorithmState*, UniqueID) (local_scheduler_algorithm.cc:1252)
==9715== by 0x407FE7: process_plasma_notification(aeEventLoop*, int, void*, int) (local_scheduler.cc:621)
==9715== by 0x443FF7: aeProcessEvents (ae.c:412)
==9715== by 0x444181: aeMain (ae.c:455)
==9715== by 0x424973: event_loop_run(aeEventLoop*) (event_loop.cc:58)
==9715== by 0x40AD4D: start_server(char const*, char const*, char const*, int, char const*, char const*, char const*, bool, double const*, char const*, int) (local_scheduler.cc:1288)
==9715== by 0x40B620: main (local_scheduler.cc:1432)
==9715==
==9715==
==9715== HEAP SUMMARY:
==9715== in use at exit: 104 bytes in 2 blocks
==9715== total heap usage: 67 allocs, 66 frees, 63,092 bytes allocated
==9715==
==9715== 8 bytes in 1 blocks are still reachable in loss record 1 of 2
==9715== at 0x4C2B0E0: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9715== by 0x422629: __gnu_cxx::new_allocator<std::_List_iterator<TaskQueueEntry> >::allocate(unsigned long, void const*) (new_allocator.h:104)
==9715== by 0x4207FE: std::_Vector_base<std::_List_iterator<TaskQueueEntry>, std::allocator<std::_List_iterator<TaskQueueEntry> > >::_M_allocate(unsigned long) (in /home/travis/.local/lib/python2.7/site-packages/ray-0.2.1-py2.7-linux-x86_64.egg/ray/core/src/local_scheduler/local_scheduler)
==9715== by 0x41E2BD: std::_List_iterator<TaskQueueEntry>* std::vector<std::_List_iterator<TaskQueueEntry>, std::allocator<std::_List_iterator<TaskQueueEntry> > >::_M_allocate_and_copy<__gnu_cxx::__normal_iterator<std::_List_iterator<TaskQueueEntry> const*, std::vector<std::_List_iterator<TaskQueueEntry>, std::allocator<std::_List_iterator<TaskQueueEntry> > > > >(unsigned long, __gnu_cxx::__normal_iterator<std::_List_iterator<TaskQueueEntry> const*, std::vector<std::_List_iterator<TaskQueueEntry>, std::allocator<std::_List_iterator<TaskQueueEntry> > > >, __gnu_cxx::__normal_iterator<std::_List_iterator<TaskQueueEntry> const*, std::vector<std::_List_iterator<TaskQueueEntry>, std::allocator<std::_List_iterator<TaskQueueEntry> > > >) (stl_vector.h:1138)
==9715== by 0x41C481: std::vector<std::_List_iterator<TaskQueueEntry>, std::allocator<std::_List_iterator<TaskQueueEntry> > >::operator=(std::vector<std::_List_iterator<TaskQueueEntry>, std::allocator<std::_List_iterator<TaskQueueEntry> > > const&) (vector.tcc:188)
==9715== by 0x41B958: ObjectEntry::operator=(ObjectEntry const&) (local_scheduler_algorithm.cc:28)
==9715== by 0x41AAFE: handle_object_available(LocalSchedulerState*, SchedulingAlgorithmState*, UniqueID) (local_scheduler_algorithm.cc:1230)
==9715== by 0x407FE7: process_plasma_notification(aeEventLoop*, int, void*, int) (local_scheduler.cc:621)
==9715== by 0x443FF7: aeProcessEvents (ae.c:412)
==9715== by 0x444181: aeMain (ae.c:455)
==9715== by 0x424973: event_loop_run(aeEventLoop*) (event_loop.cc:58)
==9715== by 0x40AD4D: start_server(char const*, char const*, char const*, int, char const*, char const*, char const*, bool, double const*, char const*, int) (local_scheduler.cc:1288)
==9715==
==9715== 96 bytes in 1 blocks are still reachable in loss record 2 of 2
==9715== at 0x4C2AB80: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9715== by 0x42CBA7: read_message_async(aeEventLoop*, int) (io.cc:342)
==9715== by 0x407E8D: process_plasma_notification(aeEventLoop*, int, void*, int) (local_scheduler.cc:609)
==9715== by 0x443FF7: aeProcessEvents (ae.c:412)
==9715== by 0x444181: aeMain (ae.c:455)
==9715== by 0x424973: event_loop_run(aeEventLoop*) (event_loop.cc:58)
==9715== by 0x40AD4D: start_server(char const*, char const*, char const*, int, char const*, char const*, char const*, bool, double const*, char const*, int) (local_scheduler.cc:1288)
==9715== by 0x40B620: main (local_scheduler.cc:1432)
==9715==
==9715== LEAK SUMMARY:
==9715== definitely lost: 0 bytes in 0 blocks
==9715== indirectly lost: 0 bytes in 0 blocks
==9715== possibly lost: 0 bytes in 0 blocks
==9715== still reachable: 104 bytes in 2 blocks
==9715== suppressed: 0 bytes in 0 blocks
==9715==
==9715== For counts of detected and suppressed errors, rerun with: -v
==9715== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
The command "python ./python/ray/local_scheduler/test/test.py valgrind" exited with 255.
```
See https://s3.amazonaws.com/archive.travis-ci.org/jobs/282049762/log.txt?X-Amz-Expires=30&X-Amz-Date=20171002T051537Z&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJRYRXRSVGNKPKO5A/20171002/us-east-1/s3/aws4_request&X-Amz-SignedHeaders=host&X-Amz-Signature=0570d06565f1d980454844e7341f77d97065390122962608b7bf8c7c72091ae3 for an example log.
753,956 | 26,369,025,327 | IssuesEvent | 2023-01-11 18:59:05 | vaticle/bazel-distribution | https://api.github.com/repos/vaticle/bazel-distribution | closed | conflicting actions from multiple deploy_maven usages in the same package | type: bug priority: low | Similar to the issue in https://github.com/graknlabs/bazel-distribution/issues/215, if you define two java libraries, two assemble_maven targets, and two deploy_maven targets in the same package (same BUILD file), then the build fails with:
```
ERROR: file 'deploy.py' is generated by these conflicting actions:
Label: //:a_deploy_maven, //:b_deploy_maven
RuleClass: deploy_maven rule
Configuration: c459523ff98608ba9e7e7cae41b6408e94f5281c4537939d1c1bcfebf2509c7e
Mnemonic: TemplateExpand
Action key: b67a129538095ef2529e96303d414bb80c61d6cf0f7e3f201d24bd2a150cb666, 62daaeb89d652cfbfa69c1322a9de03ff015aff48e0a5cdd381a8e0ca05f8aa7
Progress message: Expanding template deploy.py
PrimaryInput: File:[/private/var/tmp/_bazel_dsilva/c65e62889b3448fbc3eab4f5ca51b208[source]]external/graknlabs_bazel_distribution/maven/templates/deploy.py
PrimaryOutput: File:[[<execution_root>]bazel-out/darwin-fastbuild/bin]deploy.py
Owner information: //:a_deploy_maven BuildConfigurationValue.Key[c459523ff98608ba9e7e7cae41b6408e94f5281c4537939d1c1bcfebf2509c7e] false, //:b_deploy_maven BuildConfigurationValue.Key[c459523ff98608ba9e7e7cae41b6408e94f5281c4537939d1c1bcfebf2509c7e] false
MandatoryInputs: are equal
Outputs: are equal
ERROR: com.google.devtools.build.lib.actions.MutableActionGraph$ActionConflictException: for deploy.py, previous action: action 'Expanding template deploy.py', attempted action: action 'Expanding template deploy.py'
```
Is that because deploy.py in https://github.com/graknlabs/bazel-distribution/blob/2618fa815cc1d2843db2d52c2df723b9faba0826/maven/templates/rules.bzl#L431 isn't prefixed with `ctx.attr.name`?
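The collision described above reduces to a naming question: when every `deploy_maven` target in a package expands the same `deploy.py` output, Bazel sees two actions claiming one file, whereas a target-name prefix (as the closing question suggests) keeps the outputs distinct. The helper below is a hypothetical Python model of that naming choice, not Starlark from the repository.

```python
# Hypothetical model: each target declares one expanded script. Without a
# per-target prefix both declare "deploy.py" and collide; with a prefix
# derived from the target name, the declared outputs stay unique.

def declared_output(target_name, prefix_with_name):
    return f"{target_name}_deploy.py" if prefix_with_name else "deploy.py"

targets = ["a_deploy_maven", "b_deploy_maven"]

unprefixed = {declared_output(t, prefix_with_name=False) for t in targets}
prefixed = {declared_output(t, prefix_with_name=True) for t in targets}

print(len(unprefixed), len(prefixed))  # → 1 2  (1 distinct path = conflict)
```

In Starlark terms this corresponds to declaring the expanded file with a name built from `ctx.attr.name` instead of the fixed string `deploy.py`.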
340,941 | 30,555,800,834 | IssuesEvent | 2023-07-20 11:37:12 | systemd/systemd | https://api.github.com/repos/systemd/systemd | opened | test: test-time-util fails again... | bug π util-lib tests | ### systemd version the issue has been seen with
HEAD
### Used distribution
_No response_
### Linux kernel version used
_No response_
### CPU architectures issue was seen on
None
### Component
_No response_
### Expected behaviour you didn't see
_No response_
### Unexpected behaviour you saw
https://github.com/systemd/systemd/pull/28463#issuecomment-1643721036
> @mrc0mmand have you seen this one before?
>
> ```
> 11:28:58 @1504938962980066 → Sat 2017-09-09 08:36:02 CAT → @1504942562000000 → Sat 2017-09-09 09:36:02 CAT
> 11:28:58 Assertion 'x / USEC_PER_SEC == y / USEC_PER_SEC' failed at src/test/test-time-util.c:406, function test_format_timestamp_impl(). Aborting.
> ```
### Steps to reproduce the problem
_No response_
### Additional program output to the terminal or log subsystem illustrating the issue
_No response_ | 1.0 | test: test-time-util fails again... - ### systemd version the issue has been seen with
HEAD
### Used distribution
_No response_
### Linux kernel version used
_No response_
### CPU architectures issue was seen on
None
### Component
_No response_
### Expected behaviour you didn't see
_No response_
### Unexpected behaviour you saw
https://github.com/systemd/systemd/pull/28463#issuecomment-1643721036
> @mrc0mmand have you seen this one before?
>
> ```
> 11:28:58 @1504938962980066 → Sat 2017-09-09 08:36:02 CAT → @1504942562000000 → Sat 2017-09-09 09:36:02 CAT
> 11:28:58 Assertion 'x / USEC_PER_SEC == y / USEC_PER_SEC' failed at src/test/test-time-util.c:406, function test_format_timestamp_impl(). Aborting.
> ```
### Steps to reproduce the problem
_No response_
### Additional program output to the terminal or log subsystem illustrating the issue
_No response_ | test | test test time util fails again systemd version the issue has been seen with head used distribution no response linux kernel version used no response cpu architectures issue was seen on none component no response expected behaviour you didn t see no response unexpected behaviour you saw have you seen this one before sat cat sat cat assertion x usec per sec y usec per sec failed at src test test time util c function test format timestamp impl aborting steps to reproduce the problem no response additional program output to the terminal or log subsystem illustrating the issue no response | 1 |
4,769 | 7,242,629,613 | IssuesEvent | 2018-02-14 08:40:56 | MarcelloNicoletti/OS-BST-CountWords | https://api.github.com/repos/MarcelloNicoletti/OS-BST-CountWords | opened | Output file parallels input file | enhancement requirement | Professor will provide inputXX.txt files and I can choose what goes in XX. Professor also gives expected output for each scenario in outputXX.txt
Need to create/update "myoutputXX.txt" file with the output of my program and I need to ensure that XX matches between the output and input files. | 1.0 | Output file parallels input file - Professor will provide inputXX.txt files and I can choose what goes in XX. Professor also gives expected output for each scenario in outputXX.txt
Need to create/update "myoutputXX.txt" file with the output of my program and I need to ensure that XX matches between the output and input files. | non_test | output file parallels input file professor will provide inputxx txt files and i can choose what goes in xx professor also gives expected output for each scenario in outputxx txt need to create update myoutputxx txt file with the output of my program and i need to ensure that xx matches between the output and input files | 0 |
167,451 | 13,025,551,672 | IssuesEvent | 2020-07-27 13:43:02 | Realm667/WolfenDoom | https://api.github.com/repos/Realm667/WolfenDoom | closed | General discussion about Critters | actor gameplay playtesting suggestion | Personally I don't have a problem with biting rats, spiders and bats but I heard it a few times now already that some people tend to not like these - or actually find them annoying:
> GET RID OF THE RATS OR MAKE THEM HARMLESS - They add nothing to gameplay and are absolutely insufferable. Just completely get rid of Rats, Bats, or any other stupid little enemy. Don't replace them with anything, just get rid of them and nothing will be missed. Not just me (that didn't like them), I've talked with many people, especially when we talk about it in the Cacoward discussion forums, who agree with just how grating these enemies are. And, from the viewpoint of both a player AND a developer myself, they're just bad gameplay. If It were me, I would leave them in, but make them harmless so they don't attack or do any damage--players can choose to engage with them and shoot them or not… either way, I feel the same about them as I did playing Episode 2: they are annoying and add nothing to the existing solid gunplay with other enemies. No Bats or Rats… Scorpions in Ep1 are… manageable… but still not fun.
This is what Scuba Steve told me. So what are your thoughts and how could we improve them?
| 1.0 | General discussion about Critters - Personally I don't have a problem with biting rats, spiders and bats but I heard it a few times now already that some people tend to not like these - or actually find them annoying:
> GET RID OF THE RATS OR MAKE THEM HARMLESS - They add nothing to gameplay and are absolutely insufferable. Just completely get rid of Rats, Bats, or any other stupid little enemy. Don't replace them with anything, just get rid of them and nothing will be missed. Not just me (that didn't like them), I've talked with many people, especially when we talk about it in the Cacoward discussion forums, who agree with just how grating these enemies are. And, from the viewpoint of both a player AND a developer myself, they're just bad gameplay. If It were me, I would leave them in, but make them harmless so they don't attack or do any damage--players can choose to engage with them and shoot them or not… either way, I feel the same about them as I did playing Episode 2: they are annoying and add nothing to the existing solid gunplay with other enemies. No Bats or Rats… Scorpions in Ep1 are… manageable… but still not fun.
This is what Scuba Steve told me. So what are your thoughts and how could we improve them?
| test | general discussion about critters personally i don t have a problem with biting rats spiders and bats but i heard it a few times now already that some people tend to not like these or actually find them annoying get rid of the rats or make them harmless they add nothing to gameplay and are absolutely insufferable just completely get rid of rats bats or any other stupid little enemy don t replace them with anything just get rid of them and nothing will be missed not just me that didn t like them i ve talked with many people especially when we talk about it in the cacoward discussion forums who agree with just how grating these enemies are and from the viewpoint of both a player and a developer myself they re just bad gameplay if it were me i would leave them in but make them harmless so they don t attack or do any damage players can choose to engage with them and shoot them or not either way i feel the same about them as i did playing episode they are annoying and add nothing to the existing solid gunplay with other enemies no bats or rats scorpions in are manageable but still not fun this is what scuba steve told me so what are your thoughts and how could we improve them | 1 |
635,579 | 20,406,619,659 | IssuesEvent | 2022-02-23 06:40:52 | kubesphere/console | https://api.github.com/repos/kubesphere/console | closed | Edit federated service tag selector error | kind/bug kind/need-to-verify priority/low | **Describe the bug**
1. click more,click Edit Config Template
2. delete the key, click save,click to confirm
3. The update is successful, check that the resource status has not changed


**Versions used(KubeSphere/Kubernetes)**
KubeSphere: nightly-20210927
/kind bug
/@kubesphere/sig-console
/priority low | 1.0 | Edit federated service tag selector error - **Describe the bug**
1. click more,click Edit Config Template
2. delete the key, click save,click to confirm
3. The update is successful, check that the resource status has not changed


**Versions used(KubeSphere/Kubernetes)**
KubeSphere: nightly-20210927
/kind bug
/@kubesphere/sig-console
/priority low | non_test | edit federated service tag selector error describe the bug click more click edit config template delete the key click save click to confirm the update is successful check that the resource status has not changed versions used kubesphere kubernetes kubesphere nightly kind bug kubesphere sig console priority low | 0 |
207,890 | 15,857,025,604 | IssuesEvent | 2021-04-08 03:46:24 | jcsnorlax97/rentr | https://api.github.com/repos/jcsnorlax97/rentr | opened | Testing Login & Registration property with TestCafe | frontend testing | ### Task Description:
- This task aims to create tests for the Login & Registration property.
- Areas that involve updates:
- `/server/e2e_tests` (create a new file for this test) | 1.0 | Testing Login & Registration property with TestCafe - ### Task Description:
- This task aims to create tests for the Login & Registration property.
- Areas that involve updates:
- `/server/e2e_tests` (create a new file for this test) | test | testing login registration property with testcafe task description this task aims to create tests for the login registration property areas that involve updates server tests create a new file for this test | 1 |
235,987 | 19,477,288,354 | IssuesEvent | 2021-12-24 15:23:25 | IQSS/dataverse-client-r | https://api.github.com/repos/IQSS/dataverse-client-r | closed | Test with jenkins.dataverse.org | enhancement testing | Over at https://github.com/IQSS/dataverse/issues/5725 we're talking about a new continous integration service for the Dataverse community hosted by UNC (thanks!) at https://jenkins.dataverse.org
We should add a job for this client library, dataverse-client-r. | 1.0 | Test with jenkins.dataverse.org - Over at https://github.com/IQSS/dataverse/issues/5725 we're talking about a new continous integration service for the Dataverse community hosted by UNC (thanks!) at https://jenkins.dataverse.org
We should add a job for this client library, dataverse-client-r. | test | test with jenkins dataverse org over at we re talking about a new continous integration service for the dataverse community hosted by unc thanks at we should add a job for this client library dataverse client r | 1 |
67,083 | 7,034,637,281 | IssuesEvent | 2017-12-27 18:01:03 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | tests: please update pkg/kubelet/server TestServeExecInContainer and TestServeAttachContainer for Go1.9 | lifecycle/stale sig/node sig/testing | <!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
**What keywords did you search in Kubernetes issues before filing this one?** (If you have found any duplicates, you should instead reply there.):
TestServeExecInContainer
response location
---
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
* BUG REPORT
**Kubernetes version** (use `kubectl version`):
Kubernetes at commit 8bee44b65f2fbf170f70c535b5ec1b3cbfad44b6
Sorry, I haven't installed kubectl, I was just wandering around the code
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
OS X
- **Kernel** (e.g. `uname -a`):
Darwin Emmanuels-MacBook-Pro-2.local 15.6.0 Darwin Kernel Version 15.6.0: Mon Aug 29 20:21:34 PDT 2016; root:xnu-3248.60.11~1/RELEASE_X86_64 x86_64
- **Install tools**:
Installed from source
- **Others**:
**What happened**:
Tests:
* TestServeExecInContainer
* TestServeAttachContainer
fail as of Go tip https://github.com/golang/go/commit/6a6c792eef55eded7fb3165a330ec2b239b83960 which will be part of Go1.9
**What you expected to happen**:
Those tests to pass
**How to reproduce it** (as minimally and precisely as possible):
Run tests:
* TestServeExecInContainer
* TestServeAttachContainer
**Anything else we need to know**:
The tests seem to be adapted to an old/buggy behavior of Go's http.ServeMux not forwarding along
the query string during redirects as per https://github.com/golang/go/issues/17841.
The bug in Go has since been fixed by CL https://golang.org/cl/43779 and I got this failure when running `make test` in Kubernetes
in particular
```
server_test.go:1327: 7: response location: expected http://localhost:12345/exec, got http://localhost:12345/exec?ignore=1&command=ls&command=-a&output=1
```
or in detail
```shell
E0523 04:03:05.187580 89629 server.go:244] Authorization error (user=test, verb=, resource=, subresource=)%!(EXTRA *errors.errorString=Failed)
E0523 04:03:05.318837 89629 server.go:656] you must specify at least 1 of stdin, stdout, stderr
--- FAIL: TestServeExecInContainer (0.03s)
server_test.go:1327: 7: response location: expected http://localhost:12345/exec, got http://localhost:12345/exec?ignore=1&command=ls&command=-a&output=1
E0523 04:03:05.351045 89629 server.go:618] you must specify at least 1 of stdin, stdout, stderr
--- FAIL: TestServeAttachContainer (0.03s)
server_test.go:1327: 7: response location: expected http://localhost:12345/attach, got http://localhost:12345/attach?ignore=1&output=1
E0523 04:03:05.528501 89629 server.go:730] query parameter "port" cannot be empty
E0523 04:03:05.530809 89629 server.go:730] unable to parse "abc" as a port: strconv.ParseUint: parsing "abc": invalid syntax
E0523 04:03:05.532914 89629 server.go:730] unable to parse "-1" as a port: strconv.ParseUint: parsing "-1": invalid syntax
E0523 04:03:05.534807 89629 server.go:730] unable to parse "65536" as a port: strconv.ParseUint: parsing "65536": value out of range
E0523 04:03:05.536912 89629 server.go:730] port "0" must be > 0
```
or
<img width="1174" alt="screen shot 2017-05-23 at 4 14 28 am" src="https://cloud.githubusercontent.com/assets/4898263/26350283/df5b5604-3f6f-11e7-8b27-555c03d586b8.png">
Prior to that CL, the test passes
```shell
ok k8s.io/kubernetes/pkg/kubelet/server 0.525s
```
I am also going to add a small memo to the Go project's bug tracker that that fix broke things here, just to keep vigilant.
| 1.0 | tests: please update pkg/kubelet/server TestServeExecInContainer and TestServeAttachContainer for Go1.9 - <!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
**What keywords did you search in Kubernetes issues before filing this one?** (If you have found any duplicates, you should instead reply there.):
TestServeExecInContainer
response location
---
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
* BUG REPORT
**Kubernetes version** (use `kubectl version`):
Kubernetes at commit 8bee44b65f2fbf170f70c535b5ec1b3cbfad44b6
Sorry, I haven't installed kubectl, I was just wandering around the code
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
OS X
- **Kernel** (e.g. `uname -a`):
Darwin Emmanuels-MacBook-Pro-2.local 15.6.0 Darwin Kernel Version 15.6.0: Mon Aug 29 20:21:34 PDT 2016; root:xnu-3248.60.11~1/RELEASE_X86_64 x86_64
- **Install tools**:
Installed from source
- **Others**:
**What happened**:
Tests:
* TestServeExecInContainer
* TestServeAttachContainer
fail as of Go tip https://github.com/golang/go/commit/6a6c792eef55eded7fb3165a330ec2b239b83960 which will be part of Go1.9
**What you expected to happen**:
Those tests to pass
**How to reproduce it** (as minimally and precisely as possible):
Run tests:
* TestServeExecInContainer
* TestServeAttachContainer
**Anything else we need to know**:
The tests seem to be adapted to an old/buggy behavior of Go's http.ServeMux not forwarding along
the query string during redirects as per https://github.com/golang/go/issues/17841.
The bug in Go has since been fixed by CL https://golang.org/cl/43779 and I got this failure when running `make test` in Kubernetes
in particular
```
server_test.go:1327: 7: response location: expected http://localhost:12345/exec, got http://localhost:12345/exec?ignore=1&command=ls&command=-a&output=1
```
or in detail
```shell
E0523 04:03:05.187580 89629 server.go:244] Authorization error (user=test, verb=, resource=, subresource=)%!(EXTRA *errors.errorString=Failed)
E0523 04:03:05.318837 89629 server.go:656] you must specify at least 1 of stdin, stdout, stderr
--- FAIL: TestServeExecInContainer (0.03s)
server_test.go:1327: 7: response location: expected http://localhost:12345/exec, got http://localhost:12345/exec?ignore=1&command=ls&command=-a&output=1
E0523 04:03:05.351045 89629 server.go:618] you must specify at least 1 of stdin, stdout, stderr
--- FAIL: TestServeAttachContainer (0.03s)
server_test.go:1327: 7: response location: expected http://localhost:12345/attach, got http://localhost:12345/attach?ignore=1&output=1
E0523 04:03:05.528501 89629 server.go:730] query parameter "port" cannot be empty
E0523 04:03:05.530809 89629 server.go:730] unable to parse "abc" as a port: strconv.ParseUint: parsing "abc": invalid syntax
E0523 04:03:05.532914 89629 server.go:730] unable to parse "-1" as a port: strconv.ParseUint: parsing "-1": invalid syntax
E0523 04:03:05.534807 89629 server.go:730] unable to parse "65536" as a port: strconv.ParseUint: parsing "65536": value out of range
E0523 04:03:05.536912 89629 server.go:730] port "0" must be > 0
```
or
<img width="1174" alt="screen shot 2017-05-23 at 4 14 28 am" src="https://cloud.githubusercontent.com/assets/4898263/26350283/df5b5604-3f6f-11e7-8b27-555c03d586b8.png">
Prior to that CL, the test passes
```shell
ok k8s.io/kubernetes/pkg/kubelet/server 0.525s
```
I am also going to add a small memo to the Go project's bug tracker that that fix broke things here, just to keep vigilant.
| test | tests please update pkg kubelet server testserveexecincontainer and testserveattachcontainer for what keywords did you search in kubernetes issues before filing this one if you have found any duplicates you should instead reply there testserveexecincontainer response location is this a bug report or feature request choose one bug report kubernetes version use kubectl version kubernetes at commit sorry i haven t installed kubectl i was just wandering around the code environment cloud provider or hardware configuration os e g from etc os release os x kernel e g uname a darwin emmanuels macbook pro local darwin kernel version mon aug pdt root xnu release install tools installed from source others what happened tests testserveexecincontainer testserveattachcontainer fail as of go tip which will be part of what you expected to happen those tests to pass how to reproduce it as minimally and precisely as possible run tests testserveexecincontainer testserveattachcontainer anything else we need to know the tests seem to be adapted to an old buggy behavior of go s http servemux not forwarding along the query string during redirects as per the bug in go has since been fixed by cl and i got this failure when running make test in kubernetes in particular server test go response location expected got or in detail shell server go authorization error user test verb resource subresource extra errors errorstring failed server go you must specify at least of stdin stdout stderr fail testserveexecincontainer server test go response location expected got server go you must specify at least of stdin stdout stderr fail testserveattachcontainer server test go response location expected got server go query parameter port cannot be empty server go unable to parse abc as a port strconv parseuint parsing abc invalid syntax server go unable to parse as a port strconv parseuint parsing invalid syntax server go unable to parse as a port strconv parseuint parsing value out of range server go port must be or img width alt screen shot at am src prior to that cl the test passes shell ok io kubernetes pkg kubelet server i am also going to add a small memo to the go project s bug tracker that that fix broke things here just to keep vigilant | 1 |
1,969 | 2,868,991,257 | IssuesEvent | 2015-06-05 22:25:31 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | map files from 'pub build' | C5 enhancement Fixed Priority-High Pub-Build | <a href="https://github.com/jtalley"><img src="https://avatars.githubusercontent.com/u/4172903?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [jtalley](https://github.com/jtalley)**
_Originally opened as dart-lang/sdk#22174_
----
**What steps will reproduce the problem?**
1."dart2js -m test.dart" generates a source map file.
2.'pub build' does not generate a map file
**What is the expected output? What do you see instead?**
We want map files from pub build
**What version of the product are you using?**
1.8.3
**On what operating system?**
Mac/Win/Linux
**Please provide any additional information below.**
As we added transformers to our build we had to move away from calling dart2js directly and use 'pub build', which works, but we lost the ability to create map files for our release builds. I'd like to request optional map file generation for pub build.
| 1.0 | map files from 'pub build' - <a href="https://github.com/jtalley"><img src="https://avatars.githubusercontent.com/u/4172903?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [jtalley](https://github.com/jtalley)**
_Originally opened as dart-lang/sdk#22174_
----
**What steps will reproduce the problem?**
1."dart2js -m test.dart" generates a source map file.
2.'pub build' does not generate a map file
**What is the expected output? What do you see instead?**
We want map files from pub build
**What version of the product are you using?**
1.8.3
**On what operating system?**
Mac/Win/Linux
**Please provide any additional information below.**
As we added transformers to our build we had to move away from calling dart2js directly and use 'pub build', which works, but we lost the ability to create map files for our release builds. I'd like to request optional map file generation for pub build.
| non_test | map files from pub build issue by originally opened as dart lang sdk what steps will reproduce the problem quot m test dart quot generates a source map file pub build does not generate a map file what is the expected output what do you see instead we want map files from pub build what version of the product are you using on what operating system mac win linux please provide any additional information below as we added transformers to our build we had to move away from calling directly and use pub build which works but we lost the ability to create map files for our release builds i d like to request optional map file generation for pub build | 0 |
4,606 | 11,427,450,951 | IssuesEvent | 2020-02-04 00:51:19 | ietf-tapswg/api-drafts | https://api.github.com/repos/ietf-tapswg/api-drafts | closed | Add Framer to Figure 3 | Architecture ready for text | Reading something about framers in section 4.1.4 came actually a bit as a surprise to me as we don't talk a lot about framer earlier in the document (yes there is one forward reference in section 2.2 but that's it). Now that we have message in Figure 4, we should maybe also add framers...? | 1.0 | Add Framer to Figure 3 - Reading something about framers in section 4.1.4 came actually a bit as a surprise to me as we don't talk a lot about framer earlier in the document (yes there is one forward reference in section 2.2 but that's it). Now that we have message in Figure 4, we should maybe also add framers...? | non_test | add framer to figure reading something about framers in section came actually a bit as a surprise to me as we don t talk a lot about framer earlier in the document yes there is one forward reference in section but that s it now that we have message in figure we should maybe also add framers | 0 |
633,794 | 20,266,016,086 | IssuesEvent | 2022-02-15 12:08:28 | ooni/backend | https://api.github.com/repos/ooni/backend | closed | Investigate alternative databases | epic priority/medium | Investigate datastores for 2 different usecases:
- Replace PG with a more suitable alternative
- Keep PG in place and replace Pandas and the counters table on PG with an alternative
Options:
- clickhouse - see related issue
- https://prestodb.io/
- https://druid.apache.org/
- https://kylin.apache.org/ | 1.0 | Investigate alternative databases - Investigate datastores for 2 different usecases:
- Replace PG with a more suitable alternative
- Keep PG in place and replace Pandas and the counters table on PG with an alternative
Options:
- clickhouse - see related issue
- https://prestodb.io/
- https://druid.apache.org/
- https://kylin.apache.org/ | non_test | investigate alternative databases investigate datastores for different usecases replace pg with a more suitable alternative keep pg in place and replace pandas and the counters table on pg with an alternative options clickhouse see related issue | 0 |
44,029 | 5,726,049,193 | IssuesEvent | 2017-04-20 18:01:53 | phetsims/gene-expression-essentials | https://api.github.com/repos/phetsims/gene-expression-essentials | closed | need design and/or artwork for home screen and nav bar icons | design:general | As of this writing, the home screen and nav bar icons are blank. I'm not sure who should come up with the icons for these - I could probably do it if necessary - but I thought I'd log an issue and have @ariel-phet decide who should take care of it. | 1.0 | need design and/or artwork for home screen and nav bar icons - As of this writing, the home screen and nav bar icons are blank. I'm not sure who should come up with the icons for these - I could probably do it if necessary - but I thought I'd log an issue and have @ariel-phet decide who should take care of it. | non_test | need design and or artwork for home screen and nav bar icons as of this writing the home screen and nav bar icons are blank i m not sure who should come up with the icons for these i could probably do it if necessary but i thought i d log an issue and have ariel phet decide who should take care of it | 0 |
334,558 | 10,142,436,145 | IssuesEvent | 2019-08-04 00:33:12 | jenkins-x/jx | https://api.github.com/repos/jenkins-x/jx | closed | jx create env creates incorrect entries in the `config` configmap | area/preview kind/bug lifecycle/rotten priority/important-longterm | ### Summary
entries in the `config` (prow) configmap are generated incorrectly if you specify a different environment name
### Steps to reproduce the behavior
`jx create env`, specify a different environment name than the default, eg `shared-dev-staging`
### Expected behavior
entries in the configmap should refer to the `shared-dev-staging` repo
### Actual behavior
entries in the configmap get created for `environment-jx-production`
### Jx version
The output of `jx version` is:
```
NAME VERSION
jx 1.3.825
jenkins x platform 0.0.3321
Kubernetes cluster v1.10.11-gke.1
kubectl v1.13.2
helm client v2.11.0+g2e55dbe
helm server v2.12.2+g7d2b0c7
git git version 2.15.1.windows.2
Operating System Windows 10 Pro 1809 build 17763
```
### Jenkins type
<!--
Select which Jenkins installation type are you using.
-->
- [ ] Classic Jenkins
- [x] Serverless Jenkins
### Kubernetes cluster
<!--
What kind of Kubernetes cluster are you using & how did you create it?
-->
### Operating system / Environment
<!--
In which environment are you running the jx CLI?
-->
| 1.0 | jx create env creates incorrect entries in the `config` configmap - ### Summary
entries in the `config` (prow) configmap are generated incorrectly if you specify a different environment name
### Steps to reproduce the behavior
`jx create env`, specify a different environment name than the default, eg `shared-dev-staging`
### Expected behavior
entries in the configmap should refer to the `shared-dev-staging` repo
### Actual behavior
entries in the configmap get created for `environment-jx-production`
### Jx version
The output of `jx version` is:
```
NAME VERSION
jx 1.3.825
jenkins x platform 0.0.3321
Kubernetes cluster v1.10.11-gke.1
kubectl v1.13.2
helm client v2.11.0+g2e55dbe
helm server v2.12.2+g7d2b0c7
git git version 2.15.1.windows.2
Operating System Windows 10 Pro 1809 build 17763
```
### Jenkins type
<!--
Select which Jenkins installation type are you using.
-->
- [ ] Classic Jenkins
- [x] Serverless Jenkins
### Kubernetes cluster
<!--
What kind of Kubernetes cluster are you using & how did you create it?
-->
### Operating system / Environment
<!--
In which environment are you running the jx CLI?
-->
| non_test | jx create env creates incorrect entries in the config configmap summary entries in the config prow configmap are generated incorrectly if you specify a different environment name steps to reproduce the behavior jx create env specify a different environment name than the default eg shared dev staging expected behavior entries in the configmap should refer to the shared dev staging repo actual behavior entries in the configmap get created for environment jx production jx version the output of jx version is name version jx jenkins x platform kubernetes cluster gke kubectl helm client helm server git version windows operating system pro build jenkins type select which jenkins installation type are you using classic jenkins serverless jenkins kubernetes cluster what kind of kubernetes cluster are you using how did you create it operating system environment in which environment are you running the jx cli | 0 |
62,431 | 6,796,675,012 | IssuesEvent | 2017-11-01 19:51:15 | broadinstitute/gatk | https://api.github.com/repos/broadinstitute/gatk | closed | For projects built on GATK framework, automated testing is broken | bug tests | This applies to projects that import the GATK jar as part of the build process, but are not part of the GATK itself.
All unit and integration tests are (by default) broken, since the BaseTest class requires the mini fasta, even when it should not be required. This causes breakage, since a project built on the GATK should not be expected to have that file at the exact correct place in the filesystem.
The tests do not even start. | 1.0 | For projects built on GATK framework, automated testing is broken - This applies to projects that import the GATK jar as part of the build process, but are not part of the GATK itself.
All unit and integration tests are (by default) broken, since the BaseTest class requires the mini fasta, even when it should not be required. This causes breakage, since a project built on the GATK should not be expected to have that file at the exact correct place in the filesystem.
The tests do not even start. | test | for projects built on gatk framework automated testing is broken this applies to projects that import the gatk jar as part of the build process but are not part of the gatk itself all unit and integration tests are by default broken since the basetest class requires the mini fasta even when it should not be required this causes breakage since a project built on the gatk should not be expected to have that file at the exact correct place in the filesystem the tests do not even start | 1 |
114,155 | 17,192,766,996 | IssuesEvent | 2021-07-16 13:22:24 | turkdevops/grafana | https://api.github.com/repos/turkdevops/grafana | closed | WS-2020-0097 (High) detected in papaparse-4.6.3.js, papaparse-4.6.3.tgz - autoclosed | security vulnerability | ## WS-2020-0097 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>papaparse-4.6.3.js</b>, <b>papaparse-4.6.3.tgz</b></p></summary>
<p>
<details><summary><b>papaparse-4.6.3.js</b></p></summary>
<p>Fast and powerful CSV parser for the browser that supports web workers and streaming large files. Converts CSV to JSON and JSON to CSV.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/PapaParse/4.6.3/papaparse.js">https://cdnjs.cloudflare.com/ajax/libs/PapaParse/4.6.3/papaparse.js</a></p>
<p>Path to dependency file: grafana/node_modules/papaparse/player/player.html</p>
<p>Path to vulnerable library: grafana/node_modules/papaparse/player/../papaparse.js</p>
<p>
Dependency Hierarchy:
- :x: **papaparse-4.6.3.js** (Vulnerable Library)
</details>
<details><summary><b>papaparse-4.6.3.tgz</b></p></summary>
<p>Fast and powerful CSV parser for the browser that supports web workers and streaming large files. Converts CSV to JSON and JSON to CSV.</p>
<p>Library home page: <a href="https://registry.npmjs.org/papaparse/-/papaparse-4.6.3.tgz">https://registry.npmjs.org/papaparse/-/papaparse-4.6.3.tgz</a></p>
<p>Path to dependency file: grafana/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/papaparse</p>
<p>
Dependency Hierarchy:
- :x: **papaparse-4.6.3.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/grafana/commit/494d3ade5b02fb069ecdd7a9a278fd2016f5f577">494d3ade5b02fb069ecdd7a9a278fd2016f5f577</a></p>
<p>Found in base branch: <b>datasource-meta</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
papaparse before 5.2.0 are vulnerable to Regular Expression Denial of Service (ReDos). The parse function contains a malformed regular expression that takes exponentially longer to process non-numerical inputs. This allows attackers to stall systems and lead to Denial of Service.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://github.com/mholt/PapaParse/pull/779>WS-2020-0097</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1515">https://www.npmjs.com/advisories/1515</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: papaparse - 5.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2020-0097 (High) detected in papaparse-4.6.3.js, papaparse-4.6.3.tgz - autoclosed - ## WS-2020-0097 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>papaparse-4.6.3.js</b>, <b>papaparse-4.6.3.tgz</b></p></summary>
<p>
<details><summary><b>papaparse-4.6.3.js</b></p></summary>
<p>Fast and powerful CSV parser for the browser that supports web workers and streaming large files. Converts CSV to JSON and JSON to CSV.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/PapaParse/4.6.3/papaparse.js">https://cdnjs.cloudflare.com/ajax/libs/PapaParse/4.6.3/papaparse.js</a></p>
<p>Path to dependency file: grafana/node_modules/papaparse/player/player.html</p>
<p>Path to vulnerable library: grafana/node_modules/papaparse/player/../papaparse.js</p>
<p>
Dependency Hierarchy:
- :x: **papaparse-4.6.3.js** (Vulnerable Library)
</details>
<details><summary><b>papaparse-4.6.3.tgz</b></p></summary>
<p>Fast and powerful CSV parser for the browser that supports web workers and streaming large files. Converts CSV to JSON and JSON to CSV.</p>
<p>Library home page: <a href="https://registry.npmjs.org/papaparse/-/papaparse-4.6.3.tgz">https://registry.npmjs.org/papaparse/-/papaparse-4.6.3.tgz</a></p>
<p>Path to dependency file: grafana/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/papaparse</p>
<p>
Dependency Hierarchy:
- :x: **papaparse-4.6.3.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/grafana/commit/494d3ade5b02fb069ecdd7a9a278fd2016f5f577">494d3ade5b02fb069ecdd7a9a278fd2016f5f577</a></p>
<p>Found in base branch: <b>datasource-meta</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
papaparse before 5.2.0 are vulnerable to Regular Expression Denial of Service (ReDos). The parse function contains a malformed regular expression that takes exponentially longer to process non-numerical inputs. This allows attackers to stall systems and lead to Denial of Service.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://github.com/mholt/PapaParse/pull/779>WS-2020-0097</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1515">https://www.npmjs.com/advisories/1515</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: papaparse - 5.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | ws high detected in papaparse js papaparse tgz autoclosed ws high severity vulnerability vulnerable libraries papaparse js papaparse tgz papaparse js fast and powerful csv parser for the browser that supports web workers and streaming large files converts csv to json and json to csv library home page a href path to dependency file grafana node modules papaparse player player html path to vulnerable library grafana node modules papaparse player papaparse js dependency hierarchy x papaparse js vulnerable library papaparse tgz fast and powerful csv parser for the browser that supports web workers and streaming large files converts csv to json and json to csv library home page a href path to dependency file grafana package json path to vulnerable library grafana node modules papaparse dependency hierarchy x papaparse tgz vulnerable library found in head commit a href found in base branch datasource meta vulnerability details papaparse before are vulnerable to regular expression denial of service redos the parse function contains a malformed regular expression that takes exponentially longer to process non numerical inputs this allows attackers to stall systems and lead to denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution papaparse step up your open source security game with whitesource | 0 |
124,999 | 26,574,525,830 | IssuesEvent | 2023-01-21 16:34:01 | mjordan/islandora_workbench | https://api.github.com/repos/mjordan/islandora_workbench | opened | Make export_csv() a subset of get_data_from_view() | code cleanup | Most of the code in `export_csv()` also exists in `get_data_from_view()`, so it might be worth making `export_csv` tasks use this instead. The only real difference is that input for `export_csv` tasks is a CSV list of node IDs. | 1.0 | Make export_csv() a subset of get_data_from_view() - Most of the code in `export_csv()` also exists in `get_data_from_view()`, so it might be worth making `export_csv` tasks use this instead. The only real difference is that input for `export_csv` tasks is a CSV list of node IDs. | non_test | make export csv a subset of get data from view most of the code in export csv also exists in get data from view so it might be worth making export csv tasks use this instead the only real difference is that input for export csv tasks is a csv list of node ids | 0 |
810,850 | 30,264,436,220 | IssuesEvent | 2023-07-07 10:40:14 | sablier-labs/v2-periphery | https://api.github.com/repos/sablier-labs/v2-periphery | closed | Re-export Permit2 and PRBProxy types | feature priority1 | The goal is to not require end users to install third-party packages.
See `Tokens.sol` and `Math.sol` in V2 Core. | 1.0 | Re-export Permit2 and PRBProxy types - The goal is to not require end users to install third-party packages.
See `Tokens.sol` and `Math.sol` in V2 Core. | non_test | re export and prbproxy types the goal is to not require end users to install third party packages see tokens sol and math sol in core | 0 |
799,293 | 28,304,112,022 | IssuesEvent | 2023-04-10 09:13:51 | prometheus/prometheus | https://api.github.com/repos/prometheus/prometheus | closed | I don't get a summary values when using protobuf | kind/bug priority/P1 kind/more-info-needed component/scraping | ### What did you do?
I've used the ```--enable-feature=native-histograms``` which supports protobuf.
I get all metrics but the summaries, I've decode the message and saw that the values are there.
Maybe there's some non-compatibility of summaries with the old protobuf representations?
I didn't see any errors in the log.
I'm using Promethues 2.42.0 in a container.
### What did you expect to see?
To see the summaries
### What did you see instead? Under which circumstances?
I see the metric but it's empty (i.e. no quantil and values)
### System information
docker
### Prometheus version
```text
$ prometheus --version
prometheus, version 2.42.0 (branch: HEAD, revision: 225c61122d88b01d1f0eaaee0e05b6f3e0567ac0)
build user: root@c67d48967507
build date: 20230201-07:53:32
go version: go1.19.5
platform: linux/amd64
```
### Prometheus configuration file
_No response_
### Alertmanager version
_No response_
### Alertmanager configuration file
_No response_
### Logs
_No response_ | 1.0 | I don't get a summary values when using protobuf - ### What did you do?
I've used the ```--enable-feature=native-histograms``` which supports protobuf.
I get all metrics but the summaries, I've decode the message and saw that the values are there.
Maybe there's some non-compatibility of summaries with the old protobuf representations?
I didn't see any errors in the log.
I'm using Promethues 2.42.0 in a container.
### What did you expect to see?
To see the summaries
### What did you see instead? Under which circumstances?
I see the metric but it's empty (i.e. no quantil and values)
### System information
docker
### Prometheus version
```text
$ prometheus --version
prometheus, version 2.42.0 (branch: HEAD, revision: 225c61122d88b01d1f0eaaee0e05b6f3e0567ac0)
build user: root@c67d48967507
build date: 20230201-07:53:32
go version: go1.19.5
platform: linux/amd64
```
### Prometheus configuration file
_No response_
### Alertmanager version
_No response_
### Alertmanager configuration file
_No response_
### Logs
_No response_ | non_test | i don t get a summary values when using protobuf what did you do i ve used the enable feature native histograms which supports protobuf i get all metrics but the summaries i ve decode the message and saw that the values are there maybe there s some non compatibility of summaries with the old protobuf representations i didn t see any errors in the log i m using promethues in a container what did you expect to see to see the summaries what did you see instead under which circumstances i see the metric but it s empty i e no quantil and values system information docker prometheus version text prometheus version prometheus version branch head revision build user root build date go version platform linux prometheus configuration file no response alertmanager version no response alertmanager configuration file no response logs no response | 0 |
226,700 | 18,043,920,195 | IssuesEvent | 2021-09-18 14:48:11 | logicmoo/logicmoo_workspace | https://api.github.com/repos/logicmoo/logicmoo_workspace | opened | logicmoo.pfc.test.sanity_base.FILE_01A JUnit | Test_9999 logicmoo.pfc.test.sanity_base unit_test FILE_01A | (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif file_01a.pfc)
GH_MASTER_ISSUE_FINFO=
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AFILE_01A
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/813ec17487381a026b83350c360d0c79a9e2d0ae
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/813ec17487381a026b83350c360d0c79a9e2d0ae/packs_sys/pfc/t/sanity_base/file_01a.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/FILE_01A/logicmoo_pfc_test_sanity_base_FILE_01A_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/65/testReport/logicmoo.pfc.test.sanity_base/FILE_01A/logicmoo_pfc_test_sanity_base_FILE_01A_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/813ec17487381a026b83350c360d0c79a9e2d0ae
https://github.com/logicmoo/logicmoo_workspace/blob/813ec17487381a026b83350c360d0c79a9e2d0ae/packs_sys/pfc/t/sanity_base/file_01a.pfc
```
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/file_01a.pfc'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- expects_dialect(pfc).
header_sane:(must_clause_asserted(G):- cwc, must(clause_asserted_u(G))).
:- ain((must_clause_asserted(G):- cwc, must(clause_asserted_u(G)))).
must_clause_asserted(G):- cwc, must(clause_asserted_u(G)).
:- listing(must_clause_asserted).
%~ skipped( listing(must_clause_asserted))
:- sanity(predicate_property(must_clause_asserted(_),number_of_clauses(_))).
a.
:- listing(a).
%~ skipped( listing(a))
:- header_sane:listing(a).
% @TODO - fails here bc must_clause_asserted/1 needs love
%~ skipped( listing(a))
% @TODO - fails here bc must_clause_asserted/1 needs love
:- must_clause_asserted(a).
sHOW_MUST_go_on_failed_F__A__I__L_(baseKB:clause_asserted_u(a))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/file_01a.pfc#L33
%~ error( sHOW_MUST_go_on_failed_F__A__I__L_( baseKB : clause_asserted_u(a)))
%~ FILE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/file_01a.pfc#L33
```
totalTime=10
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AFILE_01A
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/813ec17487381a026b83350c360d0c79a9e2d0ae
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/813ec17487381a026b83350c360d0c79a9e2d0ae/packs_sys/pfc/t/sanity_base/file_01a.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/FILE_01A/logicmoo_pfc_test_sanity_base_FILE_01A_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/65/testReport/logicmoo.pfc.test.sanity_base/FILE_01A/logicmoo_pfc_test_sanity_base_FILE_01A_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/813ec17487381a026b83350c360d0c79a9e2d0ae
https://github.com/logicmoo/logicmoo_workspace/blob/813ec17487381a026b83350c360d0c79a9e2d0ae/packs_sys/pfc/t/sanity_base/file_01a.pfc
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k file_01a.pfc (returned 137)
| 3.0 | logicmoo.pfc.test.sanity_base.FILE_01A JUnit - (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif file_01a.pfc)
GH_MASTER_ISSUE_FINFO=
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AFILE_01A
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/813ec17487381a026b83350c360d0c79a9e2d0ae
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/813ec17487381a026b83350c360d0c79a9e2d0ae/packs_sys/pfc/t/sanity_base/file_01a.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/FILE_01A/logicmoo_pfc_test_sanity_base_FILE_01A_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/65/testReport/logicmoo.pfc.test.sanity_base/FILE_01A/logicmoo_pfc_test_sanity_base_FILE_01A_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/813ec17487381a026b83350c360d0c79a9e2d0ae
https://github.com/logicmoo/logicmoo_workspace/blob/813ec17487381a026b83350c360d0c79a9e2d0ae/packs_sys/pfc/t/sanity_base/file_01a.pfc
```
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/file_01a.pfc'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- expects_dialect(pfc).
header_sane:(must_clause_asserted(G):- cwc, must(clause_asserted_u(G))).
:- ain((must_clause_asserted(G):- cwc, must(clause_asserted_u(G)))).
must_clause_asserted(G):- cwc, must(clause_asserted_u(G)).
:- listing(must_clause_asserted).
%~ skipped( listing(must_clause_asserted))
:- sanity(predicate_property(must_clause_asserted(_),number_of_clauses(_))).
a.
:- listing(a).
%~ skipped( listing(a))
:- header_sane:listing(a).
% @TODO - fails here bc must_clause_asserted/1 needs love
%~ skipped( listing(a))
% @TODO - fails here bc must_clause_asserted/1 needs love
:- must_clause_asserted(a).
sHOW_MUST_go_on_failed_F__A__I__L_(baseKB:clause_asserted_u(a))
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/file_01a.pfc#L33
%~ error( sHOW_MUST_go_on_failed_F__A__I__L_( baseKB : clause_asserted_u(a)))
%~ FILE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/file_01a.pfc#L33
```
totalTime=10
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AFILE_01A
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/813ec17487381a026b83350c360d0c79a9e2d0ae
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/813ec17487381a026b83350c360d0c79a9e2d0ae/packs_sys/pfc/t/sanity_base/file_01a.pfc
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/FILE_01A/logicmoo_pfc_test_sanity_base_FILE_01A_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/65/testReport/logicmoo.pfc.test.sanity_base/FILE_01A/logicmoo_pfc_test_sanity_base_FILE_01A_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/813ec17487381a026b83350c360d0c79a9e2d0ae
https://github.com/logicmoo/logicmoo_workspace/blob/813ec17487381a026b83350c360d0c79a9e2d0ae/packs_sys/pfc/t/sanity_base/file_01a.pfc
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k file_01a.pfc (returned 137)
| test | logicmoo pfc test sanity base file junit cd var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base timeout foreground preserve status s sigkill k lmoo clif file pfc gh master issue finfo issue search gitlab latest this build github running var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base file pfc this test might need use module library logicmoo plarkc expects dialect pfc header sane must clause asserted g cwc must clause asserted u g ain must clause asserted g cwc must clause asserted u g must clause asserted g cwc must clause asserted u g listing must clause asserted skipped listing must clause asserted sanity predicate property must clause asserted number of clauses a listing a skipped listing a header sane listing a todo fails here bc must clause asserted needs love skipped listing a todo fails here bc must clause asserted needs love must clause asserted a show must go on failed f a i l basekb clause asserted u a file error show must go on failed f a i l basekb clause asserted u a file totaltime issue search gitlab latest this build github failed var lib jenkins workspace logicmoo workspace bin lmoo junit minor k file pfc returned | 1 |
387,131 | 26,714,116,620 | IssuesEvent | 2023-01-28 09:02:25 | Azure/azure-storage-fuse | https://api.github.com/repos/Azure/azure-storage-fuse | closed | Blobfuse2 failed to connect to private endpoint in adls connection type but works in block type. | triaged documentation | ### Which version of blobfuse was used?
blobfuse2 version 2.0.1
### Which OS distribution and version are you using?
Ubuntu 20.04.5 LTS
### If relevant, please share your mount command.
blobfuse2 mount azure_data --config-file=./fuse2_conf4.yaml
blobfuse2 mount azure_data --config-file ./fuse2_conf4_block.yaml
### What was the issue encountered?
The blob storage with hierarchical namespace can be mounted with type "block"

But no directories visible
With type adls mount fails completely

blobfuse v1 mounts this without problem and shows directory too

Storage account properties

Networking for the storage account


If I enable public access for my VNET in Networking (firewall and virtual networks), then mount with adls type starts to work. But this is not a suitable solution because we need to make all work through the private endpoint.
### Have you found a mitigation/solution?
No
### Please share logs if available.
| 1.0 | Blobfuse2 failed to connect to private endpoint in adls connection type but works in block type. - ### Which version of blobfuse was used?
blobfuse2 version 2.0.1
### Which OS distribution and version are you using?
Ubuntu 20.04.5 LTS
### If relevant, please share your mount command.
blobfuse2 mount azure_data --config-file=./fuse2_conf4.yaml
blobfuse2 mount azure_data --config-file ./fuse2_conf4_block.yaml
### What was the issue encountered?
The blob storage with hierarchical namespace can be mounted with type "block"

But no directories visible
With type adls mount fails completely

blobfuse v1 mounts this without problem and shows directory too

Storage account properties

Networking for the storage account


If I enable public access for my VNET in Networking (firewall and virtual networks), then mount with adls type starts to work. But this is not a suitable solution because we need to make all work through the private endpoint.
### Have you found a mitigation/solution?
No
### Please share logs if available.
| non_test | failed to connect to private endpoint in adls connection type but works in block type which version of blobfuse was used version which os distribution and version are you using ubuntu lts if relevant please share your mount command mount azure data config file yaml mount azure data config file block yaml what was the issue encountered the blob storage with hierarchical namespace can be mounted with type block but no directories visible with type adls mount fails completely blobfuse mounts this without problem and shows directory too storage account properties networking for the storage account if i enable public access for my vnet in networking firewall and virtual networks then mount with adls type starts to work but this is not a suitable solution because we need to make all work through the private endpoint have you found a mitigation solution no please share logs if available | 0 |
329,877 | 24,237,833,315 | IssuesEvent | 2022-09-27 02:13:15 | robert-altom/test | https://api.github.com/repos/robert-altom/test | closed | Fix link for download AUT Alpha package | documentation revert 1.7.0 gitlab | Link for downloading the AUT Alpha package ( https://altom.com/apphttps://gitlab.com/altom/altunity/altunitytester/uploads/altUnityProAlpha/AltUnityTester.unitypackage) should be fixed.

---
<sub>You can find the original issue from GitLab [here](https://gitlab.com/altom/altunity/altunitytester/-/issues/697).</sub>
| 1.0 | Fix link for download AUT Alpha package - Link for downloading the AUT Alpha package ( https://altom.com/apphttps://gitlab.com/altom/altunity/altunitytester/uploads/altUnityProAlpha/AltUnityTester.unitypackage) should be fixed.

---
<sub>You can find the original issue from GitLab [here](https://gitlab.com/altom/altunity/altunitytester/-/issues/697).</sub>
| non_test | fix link for download aut alpha package link for downloading the aut alpha package should be fixed you can find the original issue from gitlab | 0 |
330,344 | 28,370,643,103 | IssuesEvent | 2023-04-12 16:42:40 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | opened | Spnego_fat.1 setup fails if only 1 KDC machine is available | test bug team:Core Security | Current logic needs to be updated so that if one of the two KDC machines goes offline, the FAT will still be able to run successfully | 1.0 | Spnego_fat.1 setup fails if only 1 KDC machine is available - Current logic needs to be updated so that if one of the two KDC machines goes offline, the FAT will still be able to run successfully | test | spnego fat setup fails if only kdc machine is available current logic needs to be updated so that if one of the two kdc machines goes offline the fat will still be able to run successfully | 1 |
22,249 | 4,781,982,976 | IssuesEvent | 2016-10-28 11:31:28 | IgniteUI/igniteui-js-blocks | https://api.github.com/repos/IgniteUI/igniteui-js-blocks | closed | Update the README with the list of components | documentation | The README should contain full list of components and directives we provide. Also include theming information together with icon set. | 1.0 | Update the README with the list of components - The README should contain full list of components and directives we provide. Also include theming information together with icon set. | non_test | update the readme with the list of components the readme should contain full list of components and directives we provide also include theming information together with icon set | 0 |
30,727 | 13,304,251,633 | IssuesEvent | 2020-08-25 16:38:39 | dotnet/fsharp | https://api.github.com/repos/dotnet/fsharp | closed | printf format specifier incorrectly identified in multiline strings | Area-IDE Language Service Urgency-Soon bug | Having this in a .fsx file:
```fsharp
sprintf
"%A\n\
Dependencies -\n\
%s\n\
Source - %A\n\
Install Settings\n\
%A" "" "" "" ""
```
In VS2015 there is no error reported, but VS2017 wrongly reports this error:
> Type mismatch. Expecting a
> ''a -> string -> 'b -> 'c -> 'd'
>but given a
> ''a -> string -> 'b -> string'
>The type ''a -> 'b' does not match the type 'string'
The snippet executes fine in FSI from VS in both cases.
**Workaround** - use ``"""..."""`` multiline strings for the format specifier, which work correctly. These don't require ``\`` at the end of line | 1.0 | printf format specifier incorrectly identified in multiline strings - Having this in a .fsx file:
```fsharp
sprintf
"%A\n\
Dependencies -\n\
%s\n\
Source - %A\n\
Install Settings\n\
%A" "" "" "" ""
```
In VS2015 there is no error reported, but VS2017 wrongly reports this error:
> Type mismatch. Expecting a
> ''a -> string -> 'b -> 'c -> 'd'
>but given a
> ''a -> string -> 'b -> string'
>The type ''a -> 'b' does not match the type 'string'
The snippet executes fine in FSI from VS in both cases.
**Workaround** - use ``"""..."""`` multiline strings for the format specifier, which work correctly. These don't require ``\`` at the end of line | non_test | printf format specifier incorrectly identified in multiline strings having this in a fsx file fsharp sprintf a n dependencies n s n source a n install settings n a in there is no error reported but wrongly reports this error type mismatch expecting a a string b c d but given a a string b string the type a b does not match the type string the snippet executes fine in fsi from vs in both cases workaround use multiline strings for the format specifier which work correctly these don t require at the end of line | 0 |
321,835 | 27,559,980,157 | IssuesEvent | 2023-03-07 21:06:32 | MissouriMRR/SUAS-2023 | https://api.github.com/repos/MissouriMRR/SUAS-2023 | opened | Add Unit Test for bottle_reader.py | testing vision unit test | # Add Unit Test for bottle_reader.py
## Problem
Create a unit test to test the functionality of vision > competition_inputs > bottle_reader.py
## Solution
Create a unit test that will test all functions and classes in the file.
## Additional Information
Unit tests should test the following:
Given the specified input that the function is expecting, the function should return the correct type without crashing. Function/class should behave as specified/expected in the docstring or any other descriptions.
| 2.0 | Add Unit Test for bottle_reader.py - # Add Unit Test for bottle_reader.py
## Problem
Create a unit test to test the functionality of vision > competition_inputs > bottle_reader.py
## Solution
Create a unit test that will test all functions and classes in the file.
## Additional Information
Unit tests should test the following:
Given the specified input that the function is expecting, the function should return the correct type without crashing. Function/class should behave as specified/expected in the docstring or any other descriptions.
| test | add unit test for bottle reader py add unit test for bottle reader py problem create a unit test to test the functionality of vision competition inputs bottle reader py solution create a unit test that will test all functions and classes in the file additional information unit tests should test the following given the specified input that the function is expecting the function should return the correct type without crashing function class should behave as specified expected in the docstring or any other descriptions | 1 |
61,241 | 6,730,638,062 | IssuesEvent | 2017-10-18 02:19:06 | Qihoo360/floyd | https://api.github.com/repos/Qihoo360/floyd | closed | floyd need add some extreme case to test it's safety | test | right now there is not enough case to test floyd's implementation of raft is safety, we should add some more extreme test case, or we should add some test tool like jepsen? | 1.0 | floyd need add some extreme case to test it's safety - right now there is not enough case to test floyd's implementation of raft is safety, we should add some more extreme test case, or we should add some test tool like jepsen? | test | floyd need add some extreme case to test it s safety right now there is not enough case to test floyd s implementation of raft is safety we should add some more extreme test case or we should add some test tool like jepsen | 1 |
270,987 | 20,617,966,031 | IssuesEvent | 2022-03-07 14:55:15 | bounswe/bounswe2022group8 | https://api.github.com/repos/bounswe/bounswe2022group8 | opened | Create a Wiki Page to Document Group's Research on Git | documentation Status: Help Wanted Effort: Medium Priority: High Task: assignment | **To Do:** Create a wiki page, explaining what git and GitHub are, containing useful links for GitHub tutorials and some useful commands. | 1.0 | Create a Wiki Page to Document Group's Research on Git - **To Do:** Create a wiki page, explaining what git and GitHub are, containing useful links for GitHub tutorials and some useful commands. | non_test | create a wiki page to document group s research on git to do create a wiki page explaining what git and github are containing useful links for github tutorials and some useful commands | 0 |
24,666 | 2,671,470,749 | IssuesEvent | 2015-03-24 07:02:14 | adobe/brackets | https://api.github.com/repos/adobe/brackets | closed | Using CTRL-G with split screen causes selected line to be at different locations in each window | F QuickOpen medium priority | When using the "Go to line" functionality, the selected line is at a different height on my screen: the left split screen is 3 lines higher than the right side.
This is on a 1680x1050 screen.
This is a slight issue when I'm doing diffs between files and I want to compare lines above and below content that doesn't match between files. Having an offset between them requires me to align them manually for each discrepancy.
To reproduce:
1. Change to split-screen mode
2. Fill up both screens with the same content
3. Use "Go to line" to move to some line

| 1.0 | Using CTRL-G with split screen causes selected line to be at different locations in each window - When using the "Go to line" functionality, the selected line is at a different height on my screen: the left split screen is 3 lines higher than the right side.
This is on a 1680x1050 screen.
This is a slight issue when I'm doing diffs between files and I want to compare lines above and below content that doesn't match between files. Having an offset between them requires me to align them manually for each discrepancy.
To reproduce:
1. Change to split-screen mode
2. Fill up both screens with the same content
3. Use "Go to line" to move to some line

| non_test | using ctrl g with split screen causes selected line to be at different locations in each window when using the go to line functionality the selected line is at a different height on my screen the left split screen is lines higher than the right side this is on a screen this is a slight issue when i m doing diffs between files and i want to compare lines above and below content that doesn t match between files having an offset between them requires me to align them manually for each discrepancy to reproduce change to split screen mode fill up both screens with the same content use go to line to move to some line | 0 |
335,409 | 30,028,688,262 | IssuesEvent | 2023-06-27 08:09:41 | saleor/saleor-dashboard | https://api.github.com/repos/saleor/saleor-dashboard | opened | Cypress test fail: should not be able see product variant discount not assigned to channel. TC: SALEOR_1804 | tests | **Known bug for versions:**
v314: false
**Additional Info:**
Spec: Sales discounts for variant | 1.0 | Cypress test fail: should not be able see product variant discount not assigned to channel. TC: SALEOR_1804 - **Known bug for versions:**
v314: false
**Additional Info:**
Spec: Sales discounts for variant | test | cypress test fail should not be able see product variant discount not assigned to channel tc saleor known bug for versions false additional info spec sales discounts for variant | 1 |
39,914 | 8,703,887,156 | IssuesEvent | 2018-12-05 17:49:18 | SuperTux/supertux | https://api.github.com/repos/SuperTux/supertux | closed | Editor: Tile Selector: Grid UI needs improvements | category:code involves:editor priority:low status:needs-work type:bug | The tile selector's tile grid does not match with the sector grid seen in the background. This can cause confusion. To improve this feature, it should be avoided that the sector grid shows up in the background. Possibly, the tile selector could have its own grid. | 1.0 | Editor: Tile Selector: Grid UI needs improvements - The tile selector's tile grid does not match with the sector grid seen in the background. This can cause confusion. To improve this feature, it should be avoided that the sector grid shows up in the background. Possibly, the tile selector could have its own grid. | non_test | editor tile selector grid ui needs improvements the tile selector s tile grid does not match with the sector grid seen in the background this can cause confusion to improve this feature it should be avoided that the sector grid shows up in the background possibly the tile selector could have its own grid | 0 |
104,704 | 11,420,027,214 | IssuesEvent | 2020-02-03 09:18:12 | zimmerman-zimmerman/iati.cloud | https://api.github.com/repos/zimmerman-zimmerman/iati.cloud | closed | Review Monitoring approach in Postman | DOCUMENTATION | Review Monitoring approach in Postman - currently we run a daily test on the docs, but I see some are failing, who controls the output? | 1.0 | Review Monitoring approach in Postman - Review Monitoring approach in Postman - currently we run a daily test on the docs, but I see some are failing, who controls the output? | non_test | review monitoring approach in postman review monitoring approach in postman currently we run a daily test on the docs but i see some are failing who controls the output | 0 |
229,325 | 18,291,348,589 | IssuesEvent | 2021-10-05 15:33:31 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | Deprecate Nightwatch in content-build | VSP-testing-team | With the Cypress a11y scan taking over the Nightwatch scan, we should start the process of deprecating Nightwatch in content build. The steps needed to take are as follows:
**Ensure approval to start each step of this process with some form of management before merging**
- [x] Identify any existing Nightwatch specs that do not have a Cypress counterpart and convert these tests.
- [x] Ensure that Cypress is dockerized in GHA to avoid port conflicts and other Cypress issues.
- [x] Allow any new Cypress tests to run in CI for a minimum of 5 days to catch any test flakiness.
- [x] After the waiting period has passed, remove any unused Nightwatch specs and/or helpers. Attempt to separate these PRs by codeowner.
- [x] Remove Nightwatch as a dependency and all related configs and global helpers, once all specs and spec helper files are gone.
Specs that need to be rewritten:
src/platform/site-wide/tests/accessible-modal.e2e.spec
src/site/tests/home/00-required.e2e.spec | 1.0 | Deprecate Nightwatch in content-build - With the Cypress a11y scan taking over the Nightwatch scan, we should start the process of deprecating Nightwatch in content build. The steps needed to take are as follows:
**Ensure approval to start each step of this process with some form of management before merging**
- [x] Identify any existing Nightwatch specs that do not have a Cypress counterpart and convert these tests.
- [x] Ensure that Cypress is dockerized in GHA to avoid port conflicts and other Cypress issues.
- [x] Allow any new Cypress tests to run in CI for a minimum of 5 days to catch any test flakiness.
- [x] After the waiting period has passed, remove any unused Nightwatch specs and/or helpers. Attempt to separate these PRs by codeowner.
- [x] Remove Nightwatch as a dependency and all related configs and global helpers, once all specs and spec helper files are gone.
Specs that need to be rewritten:
src/platform/site-wide/tests/accessible-modal.e2e.spec
src/site/tests/home/00-required.e2e.spec | test | deprecate nightwatch in content build with the cypress scan taking over the nightwatch scan we should start the process of deprecating nightwatch in content build the steps needed to take are as follows ensure approval to start each step of this process with some form of management before merging identify any existing nightwatch specs that do not have a cypress counterpart and convert these tests ensure that cypress is dockerized in gha to avoid port conflicts and other cypress issues allow any new cypress tests to run in ci for a minimum of days to catch any test flakiness after the waiting period has passed remove any unused nightwatch specs and or helpers attempt to separate these prs by codeowner remove nightwatch as a dependency and all related configs and global helpers once all specs and spec helper files are gone specs that need to be rewritten src platform site wide tests accessible modal spec src site tests home required spec | 1 |
49,119 | 6,009,941,563 | IssuesEvent | 2017-06-06 11:57:58 | ClassicWoW/Nefarian_1.12.1_Bugtracker | https://api.github.com/repos/ClassicWoW/Nefarian_1.12.1_Bugtracker | closed | [Naxxramas]Steinhautgargoyle | Auf Testserver Dev Behoben Script Kreaturen | What behavior is observed?
The gargoyles should have increased invisibility detection. At the moment, as a rogue, I can sneak from the very top all the way down to Noth without skilled stealth; the same goes for an invisibility potion. If it stays this way, rogues can easily grab the Frozen Rune at Noth and sell it at a high price. This was possible in classic too, but somewhat harder.
How should it behave?
You should not be able to sneak between the gargoyles. You should be revealed while in stealth mode.
Steps to reproduce
Sneak between the gargoyles at Noth while in stealth.
Additional information (screenshots, videos, class, race, level, etc.)
https://www.youtube.com/watch?v=1QximOcF5T8&t=5s
All creatures, items, objects, quests, spells etc. must be linked from our database.
https://datenbank.classic-wow.org/?npc=16168#abilities
https://datenbank.classic-wow.org/?spell=18950 maybe this one? | 1.0 | [Naxxramas]Steinhautgargoyle - What behavior is observed?
The gargoyles should have increased invisibility detection. At the moment, as a rogue, I can sneak from the very top all the way down to Noth without skilled stealth; the same goes for an invisibility potion. If it stays this way, rogues can easily grab the Frozen Rune at Noth and sell it at a high price. This was possible in classic too, but somewhat harder.
How should it behave?
You should not be able to sneak between the gargoyles. You should be revealed while in stealth mode.
Steps to reproduce
Sneak between the gargoyles at Noth while in stealth.
Additional information (screenshots, videos, class, race, level, etc.)
https://www.youtube.com/watch?v=1QximOcF5T8&t=5s
All creatures, items, objects, quests, spells etc. must be linked from our database.
https://datenbank.classic-wow.org/?npc=16168#abilities
https://datenbank.classic-wow.org/?spell=18950 maybe this one? | test | steinhautgargoyle what behavior is observed the gargoyles should have increased invisibility detection at the moment as a rogue i can sneak from the very top all the way down to noth without skilled stealth the same goes for an invisibility potion if it stays this way rogues can easily grab the frozen rune at noth and sell it at a high price this was possible in classic too but somewhat harder how should it behave you should not be able to sneak between the gargoyles you should be revealed while in stealth mode steps to reproduce sneak between the gargoyles at noth while in stealth additional information screenshots videos class race level etc all creatures items objects quests spells etc must be linked from our database maybe this one | 1 |
45,950 | 2,942,272,415 | IssuesEvent | 2015-07-02 13:28:38 | molgenis/molgenis | https://api.github.com/repos/molgenis/molgenis | closed | MappingProjectMetaData.owner xref MolgenisUser instead of string | bug data-mapper priority: first | MappingProjectMetaData.owner can be a xref since #2054 is closed.
```
// FIXME use xref to MolgenisUser when https://github.com/molgenis/molgenis/issues/2054 is fixed
addAttribute(OWNER).setDataType(STRING);
``` | 1.0 | MappingProjectMetaData.owner xref MolgenisUser instead of string - MappingProjectMetaData.owner can be a xref since #2054 is closed.
```
// FIXME use xref to MolgenisUser when https://github.com/molgenis/molgenis/issues/2054 is fixed
addAttribute(OWNER).setDataType(STRING);
``` | non_test | mappingprojectmetadata owner xref molgenisuser instead of string mappingprojectmetadata owner can be a xref since is closed fixme use xref to molgenisuser when is fixed addattribute owner setdatatype string | 0 |
284,831 | 24,624,314,226 | IssuesEvent | 2022-10-16 10:09:55 | roeszler/reabook | https://api.github.com/repos/roeszler/reabook | closed | User Story: receive an email conformation after booking an appointment | feature test User (Customer) | As a **purchaser user**, I can **receive an email conformation after booking an appointment** so that **keep the conformation of what I have booked for my own records**.
| 1.0 | User Story: receive an email conformation after booking an appointment - As a **purchaser user**, I can **receive an email conformation after booking an appointment** so that **keep the conformation of what I have booked for my own records**.
| test | user story receive an email conformation after booking an appointment as a purchaser user i can receive an email conformation after booking an appointment so that keep the conformation of what i have booked for my own records | 1 |
280,263 | 24,288,183,340 | IssuesEvent | 2022-09-29 01:39:48 | apache/incubator-eventmesh | https://api.github.com/repos/apache/incubator-eventmesh | closed | [Unit Test] add unit test of GRPC protocol metrics | testing | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/eventmesh/issues?q=is%3Aissue) and found no similar issues.
### Read the unit testing guidelines
- [X] I have read.
### Unit test request
#1411
### Describe the unit tests you want to do
add unit test of GRPC protocol metrics
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR! | 1.0 | [Unit Test] add unit test of GRPC protocol metrics - ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/eventmesh/issues?q=is%3Aissue) and found no similar issues.
### Read the unit testing guidelines
- [X] I have read.
### Unit test request
#1411
### Describe the unit tests you want to do
add unit test of GRPC protocol metrics
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR! | test | add unit test of grpc protocol metrics search before asking i had searched in the and found no similar issues read the unit testing guidelines i have read unit test request describe the unit tests you want to do add unit test of grpc protocol metrics are you willing to submit pr yes i am willing to submit a pr | 1 |
791,130 | 27,851,874,742 | IssuesEvent | 2023-03-20 19:21:10 | watertap-org/watertap-ui | https://api.github.com/repos/watertap-org/watertap-ui | closed | Consider a DOF display that updates on input tab | enhancement Priority:Normal | I think it would be nice to have the degrees of freedom (DOF) displayed on the input tab. Later, if we enable users to fix/unfix certain variables of their choosing, the "Degrees of Freedom" entry would display the latest value of DOF as the user toggles "fixed" on or off for particular vars. | 1.0 | Consider a DOF display that updates on input tab - I think it would be nice to have the degrees of freedom (DOF) displayed on the input tab. Later, if we enable users to fix/unfix certain variables of their choosing, the "Degrees of Freedom" entry would display the latest value of DOF as the user toggles "fixed" on or off for particular vars. | non_test | consider a dof display that updates on input tab i think it would be nice to have the degrees of freedom dof displayed on the input tab later if we enable users to fix unfix certain variables of their choosing the degrees of freedom entry would display the latest value of dof as the user toggles fixed on or off for particular vars | 0 |
274,897 | 23,877,458,875 | IssuesEvent | 2022-09-07 20:34:29 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | opened | QA testing - Avoid multigroup files deletion in worker nodes when the node is restarted | team/qa type/qa-testing status/not-tracked | | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
| 4.3.8 | https://github.com/wazuh/wazuh/issues/14814 | https://github.com/wazuh/wazuh/pull/14825 |
## Description
In order to validate the changes of the branch https://github.com/wazuh/wazuh/tree/14814_avoid_multigroup_deletion, some manual testing is required.
A performance issue was detected when rebooting a wazuh node in a cluster architecture. This was related to the fact that all multigroups are deleted when rebooting a node, but this task is not necessary to perform on worker nodes since the cluster already controls when there are changes that require re-synchronization between master and workers.
## Configuration
Configure at least a master node and a worker node.
https://documentation.wazuh.com/current/user-manual/configuring-cluster/index.html
## Proposed checks
- [ ] Cluster doesn't re-synchronize multigroups files when a worker node is restarted.
- [ ] Cluster doesn't re-synchronize multigroups files when a the master node is restarted.
- [ ] Cluster only synchronize multigroups files when a change is detected in a multigroup. It only synchronize the multigroups that changed, not all of them.
## Steps to reproduce
- Create a new group.
- Add an agent to the group.
- Restart any node.
- Verify that the cluster doesn't re-synchronize multigroups files. | 1.0 | QA testing - Avoid multigroup files deletion in worker nodes when the node is restarted - | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
| 4.3.8 | https://github.com/wazuh/wazuh/issues/14814 | https://github.com/wazuh/wazuh/pull/14825 |
## Description
In order to validate the changes of the branch https://github.com/wazuh/wazuh/tree/14814_avoid_multigroup_deletion, some manual testing is required.
A performance issue was detected when rebooting a wazuh node in a cluster architecture. This was related to the fact that all multigroups are deleted when rebooting a node, but this task is not necessary to perform on worker nodes since the cluster already controls when there are changes that require re-synchronization between master and workers.
## Configuration
Configure at least a master node and a worker node.
https://documentation.wazuh.com/current/user-manual/configuring-cluster/index.html
## Proposed checks
- [ ] Cluster doesn't re-synchronize multigroups files when a worker node is restarted.
- [ ] Cluster doesn't re-synchronize multigroups files when a the master node is restarted.
- [ ] Cluster only synchronize multigroups files when a change is detected in a multigroup. It only synchronize the multigroups that changed, not all of them.
## Steps to reproduce
- Create a new group.
- Add an agent to the group.
- Restart any node.
- Verify that the cluster doesn't re-synchronize multigroups files. | test | qa testing avoid multigroup files deletion in worker nodes when the node is restarted target version related issue related pr description in order to validate the changes of the branch some manual testing is required a performance issue was detected when rebooting a wazuh node in a cluster architecture this was related to the fact that all multigroups are deleted when rebooting a node but this task is not necessary to perform on worker nodes since the cluster already controls when there are changes that require re synchronization between master and workers configuration configure at least a master node and a worker node proposed checks cluster doesn t re synchronize multigroups files when a worker node is restarted cluster doesn t re synchronize multigroups files when a the master node is restarted cluster only synchronize multigroups files when a change is detected in a multigroup it only synchronize the multigroups that changed not all of them steps to reproduce create a new group add an agent to the group restart any node verify that the cluster doesn t re synchronize multigroups files | 1 |
22,744 | 3,794,037,929 | IssuesEvent | 2016-03-22 15:44:19 | eregs/notice-and-comment | https://api.github.com/repos/eregs/notice-and-comment | opened | Style and implement table of contents | design | The table of contents needs to be modified to accommodate the preamble format and fit the overall style of the eRegs N&C site.
Here is a rough wireframe of what this may look like.
 | 1.0 | Style and implement table of contents - The table of contents needs to be modified to accommodate the preamble format and fit the overall style of the eRegs N&C site.
Here is a rough wireframe of what this may look like.
 | non_test | style and implement table of contents the table of contents needs to be modified to accommodate the preamble format and fit the overall style of the eregs n c site here is a rough wireframe of what this may look like | 0 |
2,838 | 12,689,937,372 | IssuesEvent | 2020-06-21 09:14:15 | bandprotocol/bandchain | https://api.github.com/repos/bandprotocol/bandchain | closed | Update devnet deployment flow 1 | automation | - [ ] Change the deployment process, not generate new key when deploying just copy key from old chain and init with the same key
- [ ] Add requester address to genesis account with 1 BAND (`band1lxv84wp9sc409l09gqmxa4fskkxmlrd4zrh7z8`) | 1.0 | Update devnet deployment flow 1 - - [ ] Change the deployment process, not generate new key when deploying just copy key from old chain and init with the same key
- [ ] Add requester address to genesis account with 1 BAND (`band1lxv84wp9sc409l09gqmxa4fskkxmlrd4zrh7z8`) | non_test | update devnet deployment flow change the deployment process not generate new key when deploying just copy key from old chain and init with the same key add requester address to genesis account with band | 0 |
11,932 | 14,072,088,692 | IssuesEvent | 2020-11-04 00:50:41 | Electroblob77/Wizardry | https://api.github.com/repos/Electroblob77/Wizardry | closed | Possible incompatibility with summons | bug compatibility external bug stale | Minecraft version: 1.12.2
Wizardry version: 4.2.11
Environment: Server/Single
Issue details: When casting summon zombie/skeleton/wither skeley, the summon is hostile towards everything always. was expecting it to only be hostile towards things that are hostile towards me.
Other mods involved: rough mobs 2, zombie awareness(probably the problem)
| True | Possible incompatibility with summons - Minecraft version: 1.12.2
Wizardry version: 4.2.11
Environment: Server/Single
Issue details: When casting summon zombie/skeleton/wither skeley, the summon is hostile towards everything always. was expecting it to only be hostile towards things that are hostile towards me.
Other mods involved: rough mobs 2, zombie awareness(probably the problem)
| non_test | possible incompatibility with summons minecraft version wizardry version environment server single issue details when casting summon zombie skeleton wither skeley the summon is hostile towards everything always was expecting it to only be hostile towards things that are hostile towards me other mods involved rough mobs zombie awareness probably the problem | 0 |
203,135 | 15,351,000,216 | IssuesEvent | 2021-03-01 03:57:15 | Slimefun/Slimefun4 | https://api.github.com/repos/Slimefun/Slimefun4 | opened | Block Placer doesn't drop item in it when broken | π― Needs testing π Bug Report | <!-- FILL IN THE FORM BELOW -->
## :round_pushpin: Description (REQUIRED)
<!-- A clear and detailed description of what went wrong. -->
<!-- The more information you can provide, the easier we can handle this problem. -->
<!-- Start writing below this line -->
Block Placer doesn't drop item in it when broken
## :bookmark_tabs: Steps to reproduce the Issue (REQUIRED)
<!-- Tell us the exact steps to reproduce this issue, the more detailed the easier we can reproduce it. -->
<!-- Youtube Videos and Screenshots are recommended!!! -->
<!-- Start writing below this line -->
1. Place a item placer
2. Put any item in it
3. Break the placer
## :bulb: Expected behavior (REQUIRED)
<!-- What were you expecting to happen? -->
<!-- What do you think would have been the correct behaviour? -->
<!-- Start writing below this line -->
Item in the placer gets deleted
## :scroll: Server Log
<!-- Take a look at your Server Log and post any errors you can find via https://pastebin.com/ -->
<!-- If you are unsure about it, post your full log, you can find it under /logs/latest.log -->
<!-- Paste your link(s) below this line -->
## :open_file_folder: /error-reports/ Folder
<!-- Check the folder /plugins/Slimefun/error-reports/ and upload any files inside that folder. -->
<!-- You can also post these files via https://pastebin.com/ -->
<!-- Paste your link(s) below this line -->
## :compass: Environment (REQUIRED)
<!-- Any issue without the exact version numbers will be closed! -->
<!-- "latest" IS NOT A VERSION NUMBER. -->
<!-- We recommend running "/sf versions" and showing us a screenshot of that. -->
<!-- Make sure that the screenshot covers the entire output of that command. -->
<!-- If your issue is related to other plugins, make sure to include the versions of these plugins too! -->
- Server Software: Paper-503 MC 1.16.5
- Minecraft Version: 1.16.5 (1.16.5_RO.1-SNAPSHOT)
- Slimefun Version: DEV - 823 (git 84ad08be)
| 1.0 | Block Placer doesn't drop item in it when broken - <!-- FILL IN THE FORM BELOW -->
## :round_pushpin: Description (REQUIRED)
<!-- A clear and detailed description of what went wrong. -->
<!-- The more information you can provide, the easier we can handle this problem. -->
<!-- Start writing below this line -->
Block Placer doesn't drop item in it when broken
## :bookmark_tabs: Steps to reproduce the Issue (REQUIRED)
<!-- Tell us the exact steps to reproduce this issue, the more detailed the easier we can reproduce it. -->
<!-- Youtube Videos and Screenshots are recommended!!! -->
<!-- Start writing below this line -->
1. Place a item placer
2. Put any item in it
3. Break the placer
## :bulb: Expected behavior (REQUIRED)
<!-- What were you expecting to happen? -->
<!-- What do you think would have been the correct behaviour? -->
<!-- Start writing below this line -->
Item in the placer gets deleted
## :scroll: Server Log
<!-- Take a look at your Server Log and post any errors you can find via https://pastebin.com/ -->
<!-- If you are unsure about it, post your full log, you can find it under /logs/latest.log -->
<!-- Paste your link(s) below this line -->
## :open_file_folder: /error-reports/ Folder
<!-- Check the folder /plugins/Slimefun/error-reports/ and upload any files inside that folder. -->
<!-- You can also post these files via https://pastebin.com/ -->
<!-- Paste your link(s) below this line -->
## :compass: Environment (REQUIRED)
<!-- Any issue without the exact version numbers will be closed! -->
<!-- "latest" IS NOT A VERSION NUMBER. -->
<!-- We recommend running "/sf versions" and showing us a screenshot of that. -->
<!-- Make sure that the screenshot covers the entire output of that command. -->
<!-- If your issue is related to other plugins, make sure to include the versions of these plugins too! -->
- Server Software: Paper-503 MC 1.16.5
- Minecraft Version: 1.16.5 (1.16.5_RO.1-SNAPSHOT)
- Slimefun Version: DEV - 823 (git 84ad08be)
| test | block placer doesn t drop item in it when broken round pushpin description required block placer doesn t drop item in it when broken bookmark tabs steps to reproduce the issue required place a item placer put any item in it break the placer bulb expected behavior required item in the placer gets deleted scroll server log open file folder error reports folder compass environment required server software paper mc minecraft version ro snapshot slimefun version dev git | 1 |
240,102 | 20,011,306,530 | IssuesEvent | 2022-02-01 06:57:32 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | flaky editor line numbers test | *duplicate smoke-test-failure | ```
1) VSCode Smoke Tests (Electron)
Preferences
turns off editor line numbers and verifies the live change:
Error: Timeout: get elements '.line-numbers' after 20 seconds.
at poll (D:\a\1\s\test\automation\src\code.ts:109:10)
at Code.waitForElements (D:\a\1\s\test\automation\src\code.ts:258:10)
at Context.<anonymous> (src\areas\preferences\preferences.test.ts:23:4)
```
https://monacotools.visualstudio.com/DefaultCollection/Monaco/_build/results?buildId=153133&view=logs&j=672276a2-8d3a-5fab-615d-090c51352f92&t=0699ae84-7245-5a45-5eee-80b086af2725&l=73 | 1.0 | flaky editor line numbers test - ```
1) VSCode Smoke Tests (Electron)
Preferences
turns off editor line numbers and verifies the live change:
Error: Timeout: get elements '.line-numbers' after 20 seconds.
at poll (D:\a\1\s\test\automation\src\code.ts:109:10)
at Code.waitForElements (D:\a\1\s\test\automation\src\code.ts:258:10)
at Context.<anonymous> (src\areas\preferences\preferences.test.ts:23:4)
```
https://monacotools.visualstudio.com/DefaultCollection/Monaco/_build/results?buildId=153133&view=logs&j=672276a2-8d3a-5fab-615d-090c51352f92&t=0699ae84-7245-5a45-5eee-80b086af2725&l=73 | test | flaky editor line numbers test vscode smoke tests electron preferences turns off editor line numbers and verifies the live change error timeout get elements line numbers after seconds at poll d a s test automation src code ts at code waitforelements d a s test automation src code ts at context src areas preferences preferences test ts | 1 |
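The timeout in the record above comes from a generic poll-until-timeout helper (`poll` in `test/automation/src/code.ts` gives up after 20 seconds of retrying a selector). As an illustration only — not the VS Code implementation — that pattern can be sketched like this:

```python
import time

def poll(check, timeout=20.0, interval=0.1):
    """Repeatedly call `check` until it returns a truthy value or the
    timeout expires; mirrors the poll-until-timeout pattern in the trace."""
    deadline = time.monotonic() + timeout
    while True:
        result = check()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Timeout after {timeout} seconds.")
        time.sleep(interval)

# Example: the "element" only appears on the third poll.
state = {"calls": 0}

def find_elements():
    state["calls"] += 1
    return [".line-numbers"] if state["calls"] >= 3 else []

print(poll(find_elements, timeout=2.0, interval=0.01))  # ['.line-numbers']
```

A test built on such a helper turns flaky exactly as in the report above: when the condition never becomes truthy within the window (here, `.line-numbers` elements disappearing after the setting change), the poll raises a timeout instead of a crisp assertion failure.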
37,875 | 5,146,880,217 | IssuesEvent | 2017-01-13 03:39:30 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Test failure: ReadAndWrite/OutputEncoding fail with "Assert+WrapperXunitException" | area-System.Console test bug test-run-desktop | Opened on behalf of @Jiayili1
The test `ReadAndWrite/OutputEncoding` has failed.
```
Assert+WrapperXunitException : File path: D:\A\_work\2\s\src\System.Console\tests\ReadAndWrite.cs. Line: 202\r
---- Assert.Equal() Failure\r
Expected: Byte[] []\r
Actual: Byte[] [239, 187, 191]
```
Stack Trace:
```
at Assert.WrapException(Exception e, String callerFilePath, Int32 callerLineNumber) in D:\A\_work\2\s\src\Common\tests\System\Diagnostics\AssertWithCallerAttributes.cs:line 583
at Assert.Equal[T](T expected, T actual, String path, Int32 line) in D:\A\_work\2\s\src\Common\tests\System\Diagnostics\AssertWithCallerAttributes.cs:line 172
at ReadAndWrite.ValidateConsoleEncoding(Encoding encoding) in D:\A\_work\2\s\src\System.Console\tests\ReadAndWrite.cs:line 202
at ReadAndWrite.OutputEncoding() in D:\A\_work\2\s\src\System.Console\tests\ReadAndWrite.cs:line 275
----- Inner Stack Trace -----
at Assert.Equal[T](T expected, T actual, String path, Int32 line) in D:\A\_work\2\s\src\Common\tests\System\Diagnostics\AssertWithCallerAttributes.cs:line 171
```
Failing configurations:
- Windows.10.Amd64
- AnyCPU-Debug
- AnyCPU-Release
link: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster/type/test~2Ffunctional~2Fdesktop~2Fcli~2F/build/20160816.02/workItem/System.Console.Tests/analysis/xunit/ReadAndWrite~2FOutputEncoding
| 2.0 | Test failure: ReadAndWrite/OutputEncoding fail with "Assert+WrapperXunitException" - Opened on behalf of @Jiayili1
The test `ReadAndWrite/OutputEncoding` has failed.
```
Assert+WrapperXunitException : File path: D:\A\_work\2\s\src\System.Console\tests\ReadAndWrite.cs. Line: 202\r
---- Assert.Equal() Failure\r
Expected: Byte[] []\r
Actual: Byte[] [239, 187, 191]
```
Stack Trace:
```
at Assert.WrapException(Exception e, String callerFilePath, Int32 callerLineNumber) in D:\A\_work\2\s\src\Common\tests\System\Diagnostics\AssertWithCallerAttributes.cs:line 583
at Assert.Equal[T](T expected, T actual, String path, Int32 line) in D:\A\_work\2\s\src\Common\tests\System\Diagnostics\AssertWithCallerAttributes.cs:line 172
at ReadAndWrite.ValidateConsoleEncoding(Encoding encoding) in D:\A\_work\2\s\src\System.Console\tests\ReadAndWrite.cs:line 202
at ReadAndWrite.OutputEncoding() in D:\A\_work\2\s\src\System.Console\tests\ReadAndWrite.cs:line 275
----- Inner Stack Trace -----
at Assert.Equal[T](T expected, T actual, String path, Int32 line) in D:\A\_work\2\s\src\Common\tests\System\Diagnostics\AssertWithCallerAttributes.cs:line 171
```
Failing configurations:
- Windows.10.Amd64
- AnyCPU-Debug
- AnyCPU-Release
link: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster/type/test~2Ffunctional~2Fdesktop~2Fcli~2F/build/20160816.02/workItem/System.Console.Tests/analysis/xunit/ReadAndWrite~2FOutputEncoding
| test | test failure readandwrite outputencoding fail with assert wrapperxunitexception opened on behalf of the test readandwrite outputencoding has failed assert wrapperxunitexception file path d a work s src system console tests readandwrite cs line r assert equal failure r expected byte r actual byte stack trace at assert wrapexception exception e string callerfilepath callerlinenumber in d a work s src common tests system diagnostics assertwithcallerattributes cs line at assert equal t expected t actual string path line in d a work s src common tests system diagnostics assertwithcallerattributes cs line at readandwrite validateconsoleencoding encoding encoding in d a work s src system console tests readandwrite cs line at readandwrite outputencoding in d a work s src system console tests readandwrite cs line inner stack trace at assert equal t expected t actual string path line in d a work s src common tests system diagnostics assertwithcallerattributes cs line failing configurations windows anycpu debug anycpu release link | 1 |
91,827 | 8,319,872,001 | IssuesEvent | 2018-09-25 18:28:26 | nasa-gibs/worldview | https://api.github.com/repos/nasa-gibs/worldview | closed | [settings.nav.7] Layer Settings window does not close when you change to the Events tab | bug testing | The Layer Settings window does not close when you change to the Events tab.
Expected behavior:
The layer settings window should close when you change to the Events or Data download tabs. | 1.0 | [settings.nav.7] Layer Settings window does not close when you change to the Events tab - The Layer Settings window does not close when you change to the Events tab.
Expected behavior:
The layer settings window should close when you change to the Events or Data download tabs. | test | layer settings window does not close when you change to the events tab the layer settings window does not close when you change to the events tab expected behavior the layer settings window should close when you change to the events or data download tabs | 1 |
291,061 | 25,117,876,169 | IssuesEvent | 2022-11-09 04:42:06 | ZcashFoundation/zebra | https://api.github.com/repos/ZcashFoundation/zebra | closed | ci: logs longer than GitHub's limit do not allow to see the actual error in the console | A-rust A-devops C-enhancement S-needs-triage P-Low :snowflake: I-usability C-testing | ## Motivation
Some error logs might be "hidden" if the logs are too long, needing extra steps for developer to find the actual error. And making it harder to _link_ the errors in issues/PRs
Example: https://github.com/ZcashFoundation/zebra/actions/runs/3332682537/jobs/5548148889
### Designs
We can approach this in two different ways:
- Confirm the actual limits on GitHub Actions, and just show the _end_ of the logs or the amount supported by GHA
- Make Zebra logs on certain tests less verbose
| 1.0 | ci: logs longer than GitHub's limit do not allow to see the actual error in the console - ## Motivation
Some error logs might be "hidden" if the logs are too long, needing extra steps for developer to find the actual error. And making it harder to _link_ the errors in issues/PRs
Example: https://github.com/ZcashFoundation/zebra/actions/runs/3332682537/jobs/5548148889
### Designs
We can approach this in two different ways:
- Confirm the actual limits on GitHub Actions, and just show the _end_ of the logs or the amount supported by GHA
- Make Zebra logs on certain tests less verbose
| test | ci logs longer than github s limit do not allow to see the actual error in the console motivation some error logs might be hidden if the logs are too long needing extra steps for developer to find the actual error and making it harder to link the errors in issues prs example designs we can approach this in two different ways confirm the actual limits on github actions and just show the end of the logs or the amount supported by gha make zebra logs on certain tests less verbose | 1 |
268,546 | 23,378,808,381 | IssuesEvent | 2022-08-11 07:27:00 | kubernetes-sigs/cluster-api-provider-aws | https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-aws | closed | GPU e2e test fails | kind/failing-test priority/important-soon triage/accepted | Since yesterday, the GPU test is failing:
https://testgrid.k8s.io/sig-cluster-lifecycle-cluster-api-provider-aws#periodic-e2e-main&show-stale-tests=
cc @Ankitasw
/kind failing-test | 1.0 | GPU e2e test fails - Since yesterday, the GPU test is failing:
https://testgrid.k8s.io/sig-cluster-lifecycle-cluster-api-provider-aws#periodic-e2e-main&show-stale-tests=
cc @Ankitasw
/kind failing-test | test | gpu test fails since yesterday the gpu test is failing cc ankitasw kind failing test | 1 |
59,416 | 14,582,779,473 | IssuesEvent | 2020-12-18 12:58:00 | ccr/ccr | https://api.github.com/repos/ccr/ccr | opened | Can we do static builds? | build | I have static builds turned off in configure.ac. Why? Well, I thought that we couldn't use HDF5 filters except as dynamically loaded libraries. But when I mentioned that to Elena once, she looked at me like I was crazy.
However, she does that a lot.
But I thought I should check this. Static builds are preferred by NOAA and are much easier to debug with a debugger. | 1.0 | Can we do static builds? - I have static builds turned off in configure.ac. Why? Well, I thought that we couldn't use HDF5 filters except as dynamically loaded libraries. But when I mentioned that to Elena once, she looked at me like I was crazy.
However, she does that a lot.
But I thought I should check this. Static builds are preferred by NOAA and are much easier to debug with a debugger. | non_test | can we do static builds i have static builds turned off in configure ac why well i thought that we couldn t use filters except as dynamically loaded libraries but when i mentioned that to elena once she looked at me like i was crazy however she does that a lot but i thought i should check this static builds are preferred by noaa and are much easier to debug with a debugger | 0 |
314,632 | 27,014,761,517 | IssuesEvent | 2023-02-10 18:17:54 | opensearch-project/OpenSearch | https://api.github.com/repos/opensearch-project/OpenSearch | closed | [BUG] GeoTileGridIT.testGeoShapes failure | bug >test-failure Geospatial | Seems to be reproducible:
```
./gradlew ':modules:geo:internalClusterTest' --tests "org.opensearch.geo.search.aggregations.bucket.GeoTileGridIT.testGeoShapes" -Dtests.seed=EC3F504EF22998E0
org.opensearch.geo.search.aggregations.bucket.GeoTileGridIT > testGeoShapes FAILED
java.lang.AssertionError: Geotile 3/7/4 has wrong doc count expected:<36> but was:<1>
at __randomizedtesting.SeedInfo.seed([EC3F504EF22998E0:682E8817D55CBFD]:0)
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.opensearch.geo.search.aggregations.bucket.GeoTileGridIT.testGeoShapes(GeoTileGridIT.java:78)
```
https://build.ci.opensearch.org/job/gradle-check/10625/consoleFull
/cc @navneet1v | 1.0 | [BUG] GeoTileGridIT.testGeoShapes failure - Seems to be reproducible:
```
./gradlew ':modules:geo:internalClusterTest' --tests "org.opensearch.geo.search.aggregations.bucket.GeoTileGridIT.testGeoShapes" -Dtests.seed=EC3F504EF22998E0
org.opensearch.geo.search.aggregations.bucket.GeoTileGridIT > testGeoShapes FAILED
java.lang.AssertionError: Geotile 3/7/4 has wrong doc count expected:<36> but was:<1>
at __randomizedtesting.SeedInfo.seed([EC3F504EF22998E0:682E8817D55CBFD]:0)
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.opensearch.geo.search.aggregations.bucket.GeoTileGridIT.testGeoShapes(GeoTileGridIT.java:78)
```
https://build.ci.opensearch.org/job/gradle-check/10625/consoleFull
/cc @navneet1v | test | geotilegridit testgeoshapes failure seems to be reproducible gradlew modules geo internalclustertest tests org opensearch geo search aggregations bucket geotilegridit testgeoshapes dtests seed org opensearch geo search aggregations bucket geotilegridit testgeoshapes failed java lang assertionerror geotile has wrong doc count expected but was at randomizedtesting seedinfo seed at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at org opensearch geo search aggregations bucket geotilegridit testgeoshapes geotilegridit java cc | 1 |
758,336 | 26,551,370,221 | IssuesEvent | 2023-01-20 08:06:42 | MTES-MCT/monitorenv | https://api.github.com/repos/MTES-MCT/monitorenv | closed | Contrôles - Date/heure lié entre 2 actions similaires | bug Priority | Si on ajoute plusieurs actions de contrôles, la date/heure de chaque contrôle sont liées entre elles. C'est-a-dire que si on change la date ou l'heure de l'action de contrôle 1, ça met la même date / heure sur toutes les autres actions de contrôles. Même bug pour les actions de surveillance.
Très problématique pour les missions sur plusieurs jours. | 1.0 | Contrôles - Date/heure lié entre 2 actions similaires - Si on ajoute plusieurs actions de contrôles, la date/heure de chaque contrôle sont liées entre elles. C'est-a-dire que si on change la date ou l'heure de l'action de contrôle 1, ça met la même date / heure sur toutes les autres actions de contrôles. Même bug pour les actions de surveillance.
Très problématique pour les missions sur plusieurs jours. | non_test | contrôles date heure lié entre actions similaires si on ajoute plusieurs actions de contrôles la date heure de chaque contrôle sont liées entre elles c est a dire que si on change la date ou l heure de l action de contrôle ça met la même date heure sur toutes les autres actions de contrôles même bug pour les actions de surveillance très problématique pour les missions sur plusieurs jours | 0
255,926 | 21,967,014,955 | IssuesEvent | 2022-05-24 21:29:48 | yeatmanlab/pyAFQ | https://api.github.com/repos/yeatmanlab/pyAFQ | closed | Try downsampling the data in the tests | testing | Should reduce memory requirements. Might resolve the CI issues in #160? | 1.0 | Try downsampling the data in the tests - Should reduce memory requirements. Might resolve the CI issues in #160? | test | try downsampling the data in the tests should reduce memory requirements might resolve the ci issues in | 1 |
346,523 | 24,886,950,847 | IssuesEvent | 2022-10-28 08:36:44 | Guanzhou03/ped | https://api.github.com/repos/Guanzhou03/ped | opened | Format inconsistent for some commands | severity.VeryLow type.DocumentationBug | 
Some inconsistencies in the formatting, for example in the above example CLIENT_INDEX is used, but in the example below the format does not follow the same style.

| 1.0 | Format inconsistent for some commands - ![image](https://user-images.githubusercontent.com/87654023/198524677-0cee2a58-d1a1-406e-9a6a-e9b0aad2da89.png)
Some inconsistencies in the formatting, for example in the above example CLIENT_INDEX is used, but in the example below the format does not follow the same style.

| non_test | format inconsistent for some commands some inconsistencies in the formatting for example in the above example client index is used but in the example below the format does not follow the same style | 0
278,834 | 21,097,978,659 | IssuesEvent | 2022-04-04 12:11:21 | TEIC/TEI | https://api.github.com/repos/TEIC/TEI | closed | TEI tagdoc calls a namespace declaration an attribute | Status: Needs Discussion TEI: Guidelines & Documentation | From the TEI tagdoc:
~~~xml
<remarks versionDate="2015-03-10" xml:lang="en">
<p>This element is required. It is customary to specify the
TEI namespace <code>http://www.tei-c.org/ns/1.0</code> on it, using
the <att>xmlns</att> attribute.</p>
</remarks>
~~~
The problem is that we generally use the XDM, in which `@xmlns` is **not** an attribute, but rather a namespace declaration. (To prove this to yourself without having to do a lot of reading, try adding `<xsl:attribute name="xmlns" select="'http://www.example.org/does/not/work'"/>` to your XSLT 2.0 or 3.0 program. Alright, that's not proof, but it is mighty strong evidence.)
It is not at all obvious that we should change this, though. From the XSLT (or XQuery or Schematron or XPath) programmer's point of view, the distinction is important. Thus we should change it. But from the document creator's perspective there is no difference whatsoever, and what is written is quite clear — thus on the theory that we should leave it obvious for the end user, and let the programmers figure it out, we should not change it.
See also #1871 and #2233. | 1.0 | TEI tagdoc calls a namespace declaration an attribute - From the TEI tagdoc:
~~~xml
<remarks versionDate="2015-03-10" xml:lang="en">
<p>This element is required. It is customary to specify the
TEI namespace <code>http://www.tei-c.org/ns/1.0</code> on it, using
the <att>xmlns</att> attribute.</p>
</remarks>
~~~
The problem is that we generally use the XDM, in which `@xmlns` is **not** an attribute, but rather a namespace declaration. (To prove this to yourself without having to do a lot of reading, try adding `<xsl:attribute name="xmlns" select="'http://www.example.org/does/not/work'"/>` to your XSLT 2.0 or 3.0 program. Alright, that's not proof, but it is mighty strong evidence.)
It is not at all obvious that we should change this, though. From the XSLT (or XQuery or Schematron or XPath) programmer's point of view, the distinction is important. Thus we should change it. But from the document creator's perspective there is no difference whatsoever, and what is written is quite clear — thus on the theory that we should leave it obvious for the end user, and let the programmers figure it out, we should not change it.
See also #1871 and #2233. | non_test | tei tagdoc calls a namespace declaration an attribute from the tei tagdoc xml this element is required it is customary to specify the tei namespace on it using the xmlns attribute the problem is that we generally use the xdm in which xmlns is not an attribute but rather a namespace declaration to prove this to yourself without having to do a lot of reading try adding xsl attribute name xmlns select to your xslt or program alright that s not proof but it is mighty strong evidence it is not at all obvious that we should change this though from the xslt or xquery or schematron or xpath programmer s point of view the distinction is important thus we should change it but from the document creator s perspective there is no difference whatsoever and what is written is quite clear thus on the theory that we should leave it obvious for the end user and let the programmers figure it out we should not change it see also and
218,865 | 16,772,991,315 | IssuesEvent | 2021-06-14 17:00:47 | Illusive7Man/ObservableForms | https://api.github.com/repos/Illusive7Man/ObservableForms | closed | Add graphic representation of FormGroup | documentation | Image that demonstrates how Form Group is created from html. | 1.0 | Add graphic representation of FormGroup - Image that demonstrates how Form Group is created from html. | non_test | add graphic representation of formgroup image that demonstrates how form group is created from html | 0 |
145,154 | 11,660,435,817 | IssuesEvent | 2020-03-03 03:20:03 | cityofaustin/atd-amanda | https://api.github.com/repos/cityofaustin/atd-amanda | closed | Test Possible Issue: Post Payment | Product: AMANDA Project: ROWMAN Ph 3 Type: Bug Report Type: Testing Workgroup: ROW | Per Kim, when an activation fees are billed separately, and only one is paid, the folder will auto-issue and it shouldn't.
Story: PLA bills for activation fees, but they are not paid. Then something changes, more activation fees or investigation fees are added. If only one of these bills are paid, the permit will issue itself. | 1.0 | Test Possible Issue: Post Payment - Per Kim, when an activation fees are billed separately, and only one is paid, the folder will auto-issue and it shouldn't.
Story: PLA bills for activation fees, but they are not paid. Then something changes, more activation fees or investigation fees are added. If only one of these bills are paid, the permit will issue itself. | test | test possible issue post payment per kim when an activation fees are billed separately and only one is paid the folder will auto issue and it shouldn t story pla bills for activation fees but they are not paid then something changes more activation fees or investigation fees are added if only one of these bills are paid the permit will issue itself | 1 |
219,440 | 17,091,782,566 | IssuesEvent | 2021-07-08 18:30:45 | MoarVM/MoarVM | https://api.github.com/repos/MoarVM/MoarVM | closed | MVM_panic in t/02-rakudo/12-proto-arity-count.t | testneeded | `This is Rakudo version 2018.10-76-gd3f0286c3 built on MoarVM version 2018.10-74-g2fdde4a21`
```
Starting program: /home/dan/Source/perl6/install/bin/moar --execname=./perl6-gdb-m --libpath=. --libpath=blib --libpath=/home/dan/Source/perl6/install/share/nqp/lib --libpath=/home/dan/Source/perl6/install/share/nqp/lib --libpath=/home/dan/Source/perl6/install/share/nqp/lib /home/dan/Source/perl6/rakudo/perl6.moarvm --nqp-lib=blib t/02-rakudo/12-proto-arity-count.t
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[New Thread 0x7ffff6fc0700 (LWP 22775)]
1..351
[Switching to Thread 0x7ffff6fc0700 (LWP 22775)]
Thread 2 "moar" hit Breakpoint 1, MVM_panic (exitCode=exitCode@entry=1, messageFormat=messageFormat@entry=0x7ffff7dc53c8 "Register types do not match between value and node") at src/core/exceptions.c:821
821 MVM_NO_RETURN void MVM_panic(MVMint32 exitCode, const char *messageFormat, ...) {
(gdb) bt
#0 MVM_panic (exitCode=exitCode@entry=1, messageFormat=messageFormat@entry=0x7ffff7dc53c8 "Register types do not match between value and node") at src/core/exceptions.c:821
#1 0x00007ffff7ab5bb1 in determine_live_ranges (tc=0x5555555e4f70, list=0x7ffff0144680, alc=0x7ffff6fbf4c0) at src/jit/linear_scan.c:579
#2 MVM_jit_linear_scan_allocate (tc=tc@entry=0x5555555e4f70, compiler=compiler@entry=0x7ffff6fbf630, list=list@entry=0x7ffff007aca0) at src/jit/linear_scan.c:1141
#3 0x00007ffff7aacbe2 in MVM_jit_compile_expr_tree (tc=0x5555555e4f70, compiler=0x7ffff6fbf630, jg=<optimized out>, tree=0x7ffff0144680) at src/jit/compile.c:296
#4 0x00007ffff7aacee2 in MVM_jit_compile_graph (tc=tc@entry=0x5555555e4f70, jg=jg@entry=0x7ffff02524ea) at src/jit/compile.c:77
#5 0x00007ffff7a34186 in MVM_spesh_candidate_add (tc=0x5555555e4f70, p=0x7ffff0100130) at src/spesh/candidate.c:118
#6 0x00007ffff7a454f1 in worker (tc=0x5555555e4f70, callsite=<optimized out>, args=<optimized out>) at src/spesh/worker.c:16
#7 0x00007ffff79b7d91 in thread_initial_invoke (tc=0x5555555e4f70, data=<optimized out>) at src/core/threads.c:59
#8 0x00007ffff79944ea in MVM_interp_run (tc=0x1, tc@entry=0x5555555e4f70, initial_invoke=0x0, invoke_data=0x7ffff0043da8, invoke_data@entry=0x5555555e5f30) at src/core/interp.c:110
#9 0x00007ffff79b7e16 in start_thread (data=0x5555555e5f30) at src/core/threads.c:87
#10 0x00007ffff73fea9d in start_thread () from /usr/lib/libpthread.so.0
#11 0x00007ffff7698b23 in clone () from /usr/lib/libc.so.6
(gdb)
``` | 1.0 | MVM_panic in t/02-rakudo/12-proto-arity-count.t - `This is Rakudo version 2018.10-76-gd3f0286c3 built on MoarVM version 2018.10-74-g2fdde4a21`
```
Starting program: /home/dan/Source/perl6/install/bin/moar --execname=./perl6-gdb-m --libpath=. --libpath=blib --libpath=/home/dan/Source/perl6/install/share/nqp/lib --libpath=/home/dan/Source/perl6/install/share/nqp/lib --libpath=/home/dan/Source/perl6/install/share/nqp/lib /home/dan/Source/perl6/rakudo/perl6.moarvm --nqp-lib=blib t/02-rakudo/12-proto-arity-count.t
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[New Thread 0x7ffff6fc0700 (LWP 22775)]
1..351
[Switching to Thread 0x7ffff6fc0700 (LWP 22775)]
Thread 2 "moar" hit Breakpoint 1, MVM_panic (exitCode=exitCode@entry=1, messageFormat=messageFormat@entry=0x7ffff7dc53c8 "Register types do not match between value and node") at src/core/exceptions.c:821
821 MVM_NO_RETURN void MVM_panic(MVMint32 exitCode, const char *messageFormat, ...) {
(gdb) bt
#0 MVM_panic (exitCode=exitCode@entry=1, messageFormat=messageFormat@entry=0x7ffff7dc53c8 "Register types do not match between value and node") at src/core/exceptions.c:821
#1 0x00007ffff7ab5bb1 in determine_live_ranges (tc=0x5555555e4f70, list=0x7ffff0144680, alc=0x7ffff6fbf4c0) at src/jit/linear_scan.c:579
#2 MVM_jit_linear_scan_allocate (tc=tc@entry=0x5555555e4f70, compiler=compiler@entry=0x7ffff6fbf630, list=list@entry=0x7ffff007aca0) at src/jit/linear_scan.c:1141
#3 0x00007ffff7aacbe2 in MVM_jit_compile_expr_tree (tc=0x5555555e4f70, compiler=0x7ffff6fbf630, jg=<optimized out>, tree=0x7ffff0144680) at src/jit/compile.c:296
#4 0x00007ffff7aacee2 in MVM_jit_compile_graph (tc=tc@entry=0x5555555e4f70, jg=jg@entry=0x7ffff02524ea) at src/jit/compile.c:77
#5 0x00007ffff7a34186 in MVM_spesh_candidate_add (tc=0x5555555e4f70, p=0x7ffff0100130) at src/spesh/candidate.c:118
#6 0x00007ffff7a454f1 in worker (tc=0x5555555e4f70, callsite=<optimized out>, args=<optimized out>) at src/spesh/worker.c:16
#7 0x00007ffff79b7d91 in thread_initial_invoke (tc=0x5555555e4f70, data=<optimized out>) at src/core/threads.c:59
#8 0x00007ffff79944ea in MVM_interp_run (tc=0x1, tc@entry=0x5555555e4f70, initial_invoke=0x0, invoke_data=0x7ffff0043da8, invoke_data@entry=0x5555555e5f30) at src/core/interp.c:110
#9 0x00007ffff79b7e16 in start_thread (data=0x5555555e5f30) at src/core/threads.c:87
#10 0x00007ffff73fea9d in start_thread () from /usr/lib/libpthread.so.0
#11 0x00007ffff7698b23 in clone () from /usr/lib/libc.so.6
(gdb)
``` | test | mvm panic in t rakudo proto arity count t this is rakudo version built on moarvm version starting program home dan source install bin moar execname gdb m libpath libpath blib libpath home dan source install share nqp lib libpath home dan source install share nqp lib libpath home dan source install share nqp lib home dan source rakudo moarvm nqp lib blib t rakudo proto arity count t using host libthread db library usr lib libthread db so thread moar hit breakpoint mvm panic exitcode exitcode entry messageformat messageformat entry register types do not match between value and node at src core exceptions c mvm no return void mvm panic exitcode const char messageformat gdb bt mvm panic exitcode exitcode entry messageformat messageformat entry register types do not match between value and node at src core exceptions c in determine live ranges tc list alc at src jit linear scan c mvm jit linear scan allocate tc tc entry compiler compiler entry list list entry at src jit linear scan c in mvm jit compile expr tree tc compiler jg tree at src jit compile c in mvm jit compile graph tc tc entry jg jg entry at src jit compile c in mvm spesh candidate add tc p at src spesh candidate c in worker tc callsite args at src spesh worker c in thread initial invoke tc data at src core threads c in mvm interp run tc tc entry initial invoke invoke data invoke data entry at src core interp c in start thread data at src core threads c in start thread from usr lib libpthread so in clone from usr lib libc so gdb | 1 |
96,721 | 12,153,170,289 | IssuesEvent | 2020-04-25 01:00:12 | SLB-Pizza/radio-pizza | https://api.github.com/repos/SLB-Pizza/radio-pizza | closed | Pass artist to globalState from SingleMixCard | data design/layout | ## What's the current behavior?
The artist for the current show/mix only displays on initial page load, when the TopNav is in *NOT LIVE* layout.
## What's the desired behavior?
The artist for the current show/mix only displays on all layouts, whether or not it's the live stream that's playing or a different mix.
## Tasks
**NOT LIVE**
- [x] Static Tablet-up text
- [ ] Dynamic Mobile Ticker text
**LIVE**
- [ ] Static Tablet-up text
- [ ] Dynamic Mobile Ticker text
## Related Issues
#35 - @richdacuban already added `artist` to `globalState` and `CHANGE_URL` reducer action
#53 - Make sure to format the result in a way that closes this issue
#66 - Now that it's closed, we can make this change once to reflect everywhere | 1.0 | Pass artist to globalState from SingleMixCard - ## What's the current behavior?
The artist for the current show/mix only displays on initial page load, when the TopNav is in *NOT LIVE* layout.
## What's the desired behavior?
The artist for the current show/mix only displays on all layouts, whether or not it's the live stream that's playing or a different mix.
## Tasks
**NOT LIVE**
- [x] Static Tablet-up text
- [ ] Dynamic Mobile Ticker text
**LIVE**
- [ ] Static Tablet-up text
- [ ] Dynamic Mobile Ticker text
## Related Issues
#35 - @richdacuban already added `artist` to `globalState` and `CHANGE_URL` reducer action
#53 - Make sure to format the result in a way that closes this issue
#66 - Now that it's closed, we can make this change once to reflect everywhere | non_test | pass artist to globalstate from singlemixcard what s the current behavior the artist for the current show mix only displays on initial page load when the topnav is in not live layout what s the desired behavior the artist for the current show mix only displays on all layouts whether or not it s the live stream that s playing or a different mix tasks not live static tablet up text dynamic mobile ticker text live static tablet up text dynamic mobile ticker text related issues richdacuban already added artist to globalstate and change url reducer action make sure to format the result in a way that closes this issue now that it s closed we can make this change once to reflect everywhere | 0 |
236,488 | 7,749,482,528 | IssuesEvent | 2018-05-30 11:38:40 | Spirals-Team/repairnator | https://api.github.com/repos/Spirals-Team/repairnator | closed | git.checkout().setStartPoint(commitCheckout).addPaths(paths).call() seems not to work as expected. | bug priority:critical | When trying to solve #490, I got a case when `git.checkout().setStartPoint(commitCheckout).addPaths(paths).call()` seems not to work as we expected.
Example: https://github.com/INRIA/spoon/commit/551f0fb28c49f4b1eeab93f7748c04ef82cfde39 (this commit is a fixer commit).
There are 3 test classes involved:
1) src/test/java/spoon/test/imports/ImportTest.java
2) src/test/java/spoon/test/imports/testclasses/TransportIndicesShardStoresAction.java (NEW)
3) src/test/java/spoon/test/imports/testclasses2/AbstractMapBasedMultimap.java
Consider I have checked out that commit.
Then I have its previous commit called `commitCheckout`, and a variable `paths` containing `src/test/java`.
When I call `git.checkout().setStartPoint(commitCheckout).addPaths(paths).call()`, files 1 and 3 are updated for their previous state, but file 2, the new one, remains checked out.
I see no other solution than discovering the new files and deleting them by force. I also still don't know if the opposite happens (i.e., if a deleted file is not checked out in the commit it existed). | 1.0 | git.checkout().setStartPoint(commitCheckout).addPaths(paths).call() seems not to work as expected. - When trying to solve #490, I got a case when `git.checkout().setStartPoint(commitCheckout).addPaths(paths).call()` seems not to work as we expected.
Example: https://github.com/INRIA/spoon/commit/551f0fb28c49f4b1eeab93f7748c04ef82cfde39 (this commit is a fixer commit).
There are 3 test classes involved:
1) src/test/java/spoon/test/imports/ImportTest.java
2) src/test/java/spoon/test/imports/testclasses/TransportIndicesShardStoresAction.java (NEW)
3) src/test/java/spoon/test/imports/testclasses2/AbstractMapBasedMultimap.java
Consider I have checked out that commit.
Then I have its previous commit called `commitCheckout`, and a variable `paths` containing `src/test/java`.
When I call `git.checkout().setStartPoint(commitCheckout).addPaths(paths).call()`, files 1 and 3 are updated for their previous state, but file 2, the new one, remains checked out.
I see no other solution than discovering the new files and deleting them by force. I also still don't know if the opposite happens (i.e., if a deleted file is not checked out in the commit it existed). | non_test | git checkout setstartpoint commitcheckout addpaths paths call seems not to work as expected when trying to solve i got a case when git checkout setstartpoint commitcheckout addpaths paths call seems not to work as we expected example this commit is a fixer commit there are test classes involved src test java spoon test imports importtest java src test java spoon test imports testclasses transportindicesshardstoresaction java new src test java spoon test imports abstractmapbasedmultimap java consider i have checked out that commit then i have its previous commit called commitcheckout and a variable paths containing src test java when i call git checkout setstartpoint commitcheckout addpaths paths call files and are updated for their previous state but file the new one remains checked out i see no other solution than discovering the new files and deleting them by force i also still don t know if the opposite happens i e if a deleted file is not checked out in the commit it existed | 0 |
8,537 | 7,320,249,113 | IssuesEvent | 2018-03-02 06:04:15 | brave/browser-laptop | https://api.github.com/repos/brave/browser-laptop | closed | Backing up wallet to file creates recovery file in AppData | PR/pending-review feature/ledger sec-low security | ### Description
The `brave_wallet_recovery.txt` file is stored in `%appdata%\brave\` before offering it for saving.
The file's content contains this sentence "Save this key in a safe place, separate from your Brave browser.", so Brave acts against its own advice by saving this file in the profile.
### Steps to Reproduce
1. Go to about:preferences#payments
2. Click the cogwheel
3. Click 'Back up your wallet'
4. Click 'Save recovery file...'
**Actual result:**
The file being stored in `%appdata%\brave\` as well as the location specified by the user.
**Expected result:**
The file only being stored in the location indicated by the user.
**Reproduces how often:**
Each time.
### Brave Version
**about:brave info:**
Brave | 0.19.37
-- | --
rev | c6ee3b2
Muon | 4.4.25
libchromiumcontent | 61.0.3163.100
V8 | 6.1.534.41
Node.js | 7.9.0
Update Channel | Beta
OS Platform | Microsoft Windows
OS Release | 10.0.16296
OS Architecture | x64
**Reproducible on current live release:**
This is the live beta release.
### Additional Information
Technically, all information to get the wallet would already be available in `%appdata%\brave\`, but having the recovery file there seems a bit overdoing it.
| True | Backing up wallet to file creates recovery file in AppData - ### Description
The `brave_wallet_recovery.txt` file is stored in `%appdata%\brave\` before offering it for saving.
The file's content contains this sentence "Save this key in a safe place, separate from your Brave browser.", so Brave acts against its own advice by saving this file in the profile.
### Steps to Reproduce
1. Go to about:preferences#payments
2. Click the cogwheel
3. Click 'Back up your wallet'
4. Click 'Save recovery file...'
**Actual result:**
The file being stored in `%appdata%\brave\` as well as the location specified by the user.
**Expected result:**
The file only being stored in the location indicated by the user.
**Reproduces how often:**
Each time.
### Brave Version
**about:brave info:**
Brave | 0.19.37
-- | --
rev | c6ee3b2
Muon | 4.4.25
libchromiumcontent | 61.0.3163.100
V8 | 6.1.534.41
Node.js | 7.9.0
Update Channel | Beta
OS Platform | Microsoft Windows
OS Release | 10.0.16296
OS Architecture | x64
**Reproducible on current live release:**
This is the live beta release.
### Additional Information
Technically, all information to get the wallet would already be available in `%appdata%\brave\`, but having the recovery file there seems a bit overdoing it.
| non_test | backing up wallet to file creates recovery file in appdata description the brave wallet recovery txt file is stored in appdata brave before offering it for saving the file s content contains this sentence save this key in a safe place separate from your brave browser so brave acts against its own advice by saving this file in the profile steps to reproduce go to about preferences payments click the cogwheel click back up your wallet click save recovery file actual result the file being stored in appdata brave as well as the location specified by the user expected result the file only being stored in the location indicated by the user reproduces how often each time brave version about brave info brave rev muon libchromiumcontent node js update channel beta os platform microsoft windows os release os architecture reproducible on current live release this is the live beta release additional information technically all information to get the wallet would already be available in appdata brave but having the recovery file there seems a bit overdoing it | 0 |
68,470 | 9,195,237,698 | IssuesEvent | 2019-03-07 01:34:13 | blocknative/assist | https://api.github.com/repos/blocknative/assist | opened | Change `readme.md` to reflect issues with versions of `web3.js` | documentation | Some `web3.js` versions have bugs/errors when interacting with MetaMask and `assist` reflects these errors.
Bugs exist on these versions:
`beta-38`
`beta-39`
`beta-40`
`beta-41`
`beta-43`
`beta-44`
Docs need to be updated to reflect this. | 1.0 | Change `readme.md` to reflect issues with versions of `web3.js` - Some `web3.js` versions have bugs/errors when interacting with MetaMask and `assist` reflects these errors.
Bugs exist on these versions:
`beta-38`
`beta-39`
`beta-40`
`beta-41`
`beta-43`
`beta-44`
Docs need to be updated to reflect this. | non_test | change readme md to reflect issues with versions of js some js versions have bugs errors when interacting with metamask and assist reflects these errors bugs exist on these versions beta beta beta beta beta beta docs need to be updated to reflect this | 0 |
54,529 | 6,826,279,894 | IssuesEvent | 2017-11-08 13:38:37 | stellargo/meseek | https://api.github.com/repos/stellargo/meseek | closed | [FAR] A personal bot | Needs Design | Is this a bug (yay/nay) :
nay
Is this a feature request (yay/nay) :
yay
Is this a request for change in some structure (yay/nay) :
nay
Will this affect documentation (yay/nay) :
nay
Would you be interested in taking up this issue (yay/nay) :
yay
Description:
A bot for this repo will be amazing for a lot of handling. | 1.0 | [FAR] A personal bot - Is this a bug (yay/nay) :
nay
Is this a feature request (yay/nay) :
yay
Is this a request for change in some structure (yay/nay) :
nay
Will this affect documentation (yay/nay) :
nay
Would you be interested in taking up this issue (yay/nay) :
yay
Description:
A bot for this repo will be amazing for a lot of handling. | non_test | a personal bot is this a bug yay nay nay is this a feature request yay nay yay is this a request for change in some structure yay nay nay will this affect documentation yay nay nay would you be interested in taking up this issue yay nay yay description a bot for this repo will be amazing for a lot of handling | 0 |
266,891 | 23,266,833,700 | IssuesEvent | 2022-08-04 18:15:35 | xmos/sln_voice | https://api.github.com/repos/xmos/sln_voice | closed | DFU functional test | status:help wanted size:M type:testing | Implement DFU functional test for the STLP voice application.
- [ ] USB
- [ ] I2C | 1.0 | DFU functional test - Implement DFU functional test for the STLP voice application.
- [ ] USB
- [ ] I2C | test | dfu functional test implement dfu functional test for the stlp voice application usb | 1 |
153,805 | 12,167,127,327 | IssuesEvent | 2020-04-27 10:22:22 | apache/airflow | https://api.github.com/repos/apache/airflow | closed | Prepare backport packages for providers: samba | area:backport-packages area:system-tests |
We would like to release the `samba` backport package as described in [using hooks and operators from master in Airflow 1.10](https://github.com/apache/airflow/blob/master/README.md#using-hooks-and-operators-from-master-in-airflow-110)
The providers package needs to have set of system tests that need to test operators/hooks/sensors released in the package.
How to prepare system tests is described in [System tests](https://github.com/apache/airflow/blob/master/TESTING.rst#airflow-system-tests).
- [ ] [example dags] must exist in [airflow/providers/samba](https://github.com/apache/airflow/blob/master/airflow/providers/samba) package
- [ ] the example dags should be ready to execute them e2e
- [ ] the example dags should be configurable via environment variables
- [ ] pytest test running the DAGs must exist in [tests/providers/samba](https://github.com/apache/airflow/tree/master/tests/providers/samba) package
- [ ] the example dags should authenticate using
- forwarded credentials or
- the `@pytest.mark.credential_file` annotation
- [ ] The [BACKPORT_README.md](https://github.com/apache/airflow/blob/master/airflow/providers/samba/BACKPORT_README.md) file must describe release notes for the package
Current status of backport packages is described in [Backport packages status page](https://cwiki.apache.org/confluence/display/AIRFLOW/Backported+providers+packages+for+Airflow+1.10.*+series)
| 1.0 | Prepare backport packages for providers: samba -
We would like to release the `samba` backport package as described in [using hooks and operators from master in Airflow 1.10](https://github.com/apache/airflow/blob/master/README.md#using-hooks-and-operators-from-master-in-airflow-110)
The providers package needs to have set of system tests that need to test operators/hooks/sensors released in the package.
How to prepare system tests is described in [System tests](https://github.com/apache/airflow/blob/master/TESTING.rst#airflow-system-tests).
- [ ] [example dags] must exist in [airflow/providers/samba](https://github.com/apache/airflow/blob/master/airflow/providers/samba) package
- [ ] the example dags should be ready to execute them e2e
- [ ] the example dags should be configurable via environment variables
- [ ] pytest test running the DAGs must exist in [tests/providers/samba](https://github.com/apache/airflow/tree/master/tests/providers/samba) package
- [ ] the example dags should authenticate using
- forwarded credentials or
- the `@pytest.mark.credential_file` annotation
- [ ] The [BACKPORT_README.md](https://github.com/apache/airflow/blob/master/airflow/providers/samba/BACKPORT_README.md) file must describe release notes for the package
Current status of backport packages is described in [Backport packages status page](https://cwiki.apache.org/confluence/display/AIRFLOW/Backported+providers+packages+for+Airflow+1.10.*+series)
| test | prepare backport packages for providers samba we would like to release the samba backport package as described in the providers package needs to have set of system tests that need to test operators hooks sensors released in the package how to prepare system tests is described in must exist in package the example dags should be ready to execute them the example dags should be configurable via environment variables pytest test running the dags must exist in package the example dags should authenticate using forwarded credentials or the pytest mark credential file annotation the file must describe release notes for the package current status of backport packages is described in | 1 |
312,423 | 26,863,376,572 | IssuesEvent | 2023-02-03 20:39:36 | ValveSoftware/Proton | https://api.github.com/repos/ValveSoftware/Proton | closed | Ubisoft Connect broke all Ubisoft games using it | Need Retest | When you go to launch a Ubisoft game, it will load Ubisoft Connect and then [give an error message](https://www.gamingonlinux.com/2023/02/ubisoft-broke-their-games-on-linux-desktop-and-steam-deck/).
This affects all Ubisoft games using Ubisoft Connect including:
- [Ghost Recon Breakpoint](https://store.steampowered.com/app/2231380/Tom_Clancys_Ghost_Recon_Breakpoint/)
- [The Division 2](https://store.steampowered.com/app/2221490/Tom_Clancys_The_Division_2/)
- [Watch Dogs Legion](https://store.steampowered.com/app/2239550/Watch_Dogs_Legion/)
- [Assassin's Creed Valhalla](https://store.steampowered.com/app/2208920/Assassins_Creed_Valhalla/)
- etc
## System Information
- GPU: Steam Deck / NVIDIA 2080 Ti
- Driver/LLVM version: Steam Deck Stable + Fedora 37
- Kernel version: Steam Deck Stable + 6.1.6 on Fedora
- Proton version: Experimental
## I confirm:
- [x] that I haven't found an existing compatibility report for this game.
- [x] that I have checked whether there are updates for my system available.
Here's my log of trying to run Breakpoint on Fedora where Ubisoft Connect now fails:
[steam-2231380.log](https://github.com/ValveSoftware/Proton/files/10558972/steam-2231380.log)
A log of The Division 2 on Fedora, that log file should have captured the update attempt too.
[steam-2221490.log](https://github.com/ValveSoftware/Proton/files/10558973/steam-2221490.log)
Here's a log of The Division 2 on Steam Deck, after clearing Proton Files:
[steam-2221490.log](https://github.com/ValveSoftware/Proton/files/10559083/steam-2221490.log)
| 1.0 | Ubisoft Connect broke all Ubisoft games using it - When you go to launch a Ubisoft game, it will load Ubisoft Connect and then [give an error message](https://www.gamingonlinux.com/2023/02/ubisoft-broke-their-games-on-linux-desktop-and-steam-deck/).
This affects all Ubisoft games using Ubisoft Connect including:
- [Ghost Recon Breakpoint](https://store.steampowered.com/app/2231380/Tom_Clancys_Ghost_Recon_Breakpoint/)
- [The Division 2](https://store.steampowered.com/app/2221490/Tom_Clancys_The_Division_2/)
- [Watch Dogs Legion](https://store.steampowered.com/app/2239550/Watch_Dogs_Legion/)
- [Assassin's Creed Valhalla](https://store.steampowered.com/app/2208920/Assassins_Creed_Valhalla/)
- etc
## System Information
- GPU: Steam Deck / NVIDIA 2080 Ti
- Driver/LLVM version: Steam Deck Stable + Fedora 37
- Kernel version: Steam Deck Stable + 6.1.6 on Fedora
- Proton version: Experimental
## I confirm:
- [x] that I haven't found an existing compatibility report for this game.
- [x] that I have checked whether there are updates for my system available.
Here's my log of trying to run Breakpoint on Fedora where Ubisoft Connect now fails:
[steam-2231380.log](https://github.com/ValveSoftware/Proton/files/10558972/steam-2231380.log)
A log of The Division 2 on Fedora, that log file should have captured the update attempt too.
[steam-2221490.log](https://github.com/ValveSoftware/Proton/files/10558973/steam-2221490.log)
Here's a log of The Division 2 on Steam Deck, after clearing Proton Files:
[steam-2221490.log](https://github.com/ValveSoftware/Proton/files/10559083/steam-2221490.log)
| test | ubisoft connect broke all ubisoft games using it when you go to launch a ubisoft game it will load ubisoft connect and then this affects all ubisoft games using ubisoft connect including etc system information gpu steam deck nvidia ti driver llvm version steam deck stable fedora kernel version steam deck stable on fedora proton version experimental i confirm that i haven t found an existing compatibility report for this game that i have checked whether there are updates for my system available here s my log of trying to run breakpoint on fedora where ubisoft connect now fails a log of the division on fedora that log file should have captured the update attempt too here s a log of the division on steam deck after clearing proton files | 1 |
21,720 | 6,208,826,359 | IssuesEvent | 2017-07-07 01:24:14 | ahmedahamid/test | https://api.github.com/repos/ahmedahamid/test | closed | Need Component for Source Code tab. | bug CodePlexMigrationInitiated impact: Medium Souce Code | Can't enter issues in Issue Tracker for the Source Code tab because it is not a listed Component.
#### Migrated CodePlex Work Item Details
CodePlex Work Item ID: '454'
Vote count: '0'
| 2.0 | Need Component for Source Code tab. - Can't enter issues in Issue Tracker for the Source Code tab because it is not a listed Component.
#### Migrated CodePlex Work Item Details
CodePlex Work Item ID: '454'
Vote count: '0'
| non_test | need component for source code tab can t enter issues in issue tracker for the source code tab because it is not a listed component migrated codeplex work item details codeplex work item id vote count | 0 |
160,376 | 6,087,746,384 | IssuesEvent | 2017-06-18 15:36:19 | Gaspard--/Swords-Scrolls-and-Knuckles | https://api.github.com/repos/Gaspard--/Swords-Scrolls-and-Knuckles | closed | Mob spawning | Kellen_j priority the world is burning | This one is a bit more uncertain. I believe mobs should spawn depending on some triggers (like entering a room, cleaning an area). Also mob spawners would be cool. | 1.0 | Mob spawning - This one is a bit more uncertain. I believe mobs should spawn depending on some triggers (like entering a room, cleaning an area). Also mob spawners would be cool. | non_test | mob spawning this one is a bit more uncertain i believe mobs should spawn depending on some triggers like entering a room cleaning an area also mob spawners would be cool | 0 |
96,151 | 27,761,279,549 | IssuesEvent | 2023-03-16 08:27:45 | ziglang/zig | https://api.github.com/repos/ziglang/zig | closed | WriteFileStep.add does not work with relative path | bug standard library zig build system | This causes index.html to appear in /tmp:
```
b.addWriteFile("/tmp/index.html", content);
```
But this causes no file to appear:
```
b.addWriteFile("index.html", content);
```
Zig version: 0.9.0-dev.453+7ef854682 | 1.0 | WriteFileStep.add does not work with relative path - This causes index.html to appear in /tmp:
```
b.addWriteFile("/tmp/index.html", content);
```
But this causes no file to appear:
```
b.addWriteFile("index.html", content);
```
Zig version: 0.9.0-dev.453+7ef854682 | non_test | writefilestep add does not work with relative path this causes index html to appear in tmp b addwritefile tmp index html content but this causes no file to appear b addwritefile index html content zig version dev | 0 |
153,642 | 12,155,546,674 | IssuesEvent | 2020-04-25 13:38:35 | whamcloud/integrated-manager-for-lustre | https://api.github.com/repos/whamcloud/integrated-manager-for-lustre | opened | Intermittent test failure: action_plugins::lpurge::lpurge_conf_tests::works | bug failing tests | Rust unit testing has an intermittent failure:
https://github.com/whamcloud/integrated-manager-for-lustre/runs/617943042
<img width="1443" alt="Screen Shot 2020-04-25 at 9 37 46 AM" src="https://user-images.githubusercontent.com/458717/80281359-83c4eb80-86d8-11ea-8925-6aee8ffc7d47.png">
| 1.0 | Intermittent test failure: action_plugins::lpurge::lpurge_conf_tests::works - Rust unit testing has an intermittent failure:
https://github.com/whamcloud/integrated-manager-for-lustre/runs/617943042
<img width="1443" alt="Screen Shot 2020-04-25 at 9 37 46 AM" src="https://user-images.githubusercontent.com/458717/80281359-83c4eb80-86d8-11ea-8925-6aee8ffc7d47.png">
| test | intermittent test failure action plugins lpurge lpurge conf tests works rust unit testing has an intermittent failure img width alt screen shot at am src | 1 |
173,457 | 13,401,964,946 | IssuesEvent | 2020-09-03 18:10:51 | Agoric/agoric-sdk | https://api.github.com/repos/Agoric/agoric-sdk | closed | multipoolAutoswap revamp | Zoe bug enhancement test zoe-alpha-release | Largely parallel to #1586, but applies to mutlipoolAutoswap
* [x] match autoswap API
* [x] fix addLiquidity bug (#1600)
* [x] test using jig
* [x] distinguish inputPrice/outputPrice
* [x] distinguish swapIn/swapOut
* [x] update API type declarations
* [x] Verify that we share curve logic with autoswap
* [x] review outstanding todos (#428, #1539)
* [ ] test using jig in swingset
* [ ] test removeLiquidity (and #921)
* [ ] add a todo for a notifier
* [x] contract documentation
fixes #1600
closes: #428, #1539, #921
| 1.0 | multipoolAutoswap revamp - Largely parallel to #1586, but applies to mutlipoolAutoswap
* [x] match autoswap API
* [x] fix addLiquidity bug (#1600)
* [x] test using jig
* [x] distinguish inputPrice/outputPrice
* [x] distinguish swapIn/swapOut
* [x] update API type declarations
* [x] Verify that we share curve logic with autoswap
* [x] review outstanding todos (#428, #1539)
* [ ] test using jig in swingset
* [ ] test removeLiquidity (and #921)
* [ ] add a todo for a notifier
* [x] contract documentation
fixes #1600
closes: #428, #1539, #921
| test | multipoolautoswap revamp largely parallel to but applies to mutlipoolautoswap match autoswap api fix addliquidity bug test using jig distinguish inputprice outputprice distinguish swapin swapout update api type declarations verify that we share curve logic with autoswap review outstanding todos test using jig in swingset test removeliquidity and add a todo for a notifier contract documentation fixes closes | 1 |
303,774 | 26,228,110,580 | IssuesEvent | 2023-01-04 20:47:45 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | kv/kvserver: TestSingleKey failed | C-test-failure O-robot branch-master | [(kv/kvserver).TestSingleKey failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2355430&tab=buildLog) on [master@e824da5a2b33168fa4ef93c83295c8205acfadb7](https://github.com/cockroachdb/cockroach/commits/e824da5a2b33168fa4ef93c83295c8205acfadb7):
```
=== RUN TestSingleKey
--- FAIL: TestSingleKey (0.00s)
test_log_scope.go:107: can't use TestLogScope with secondary loggers active
```
<details><summary>More</summary><p>
Parameters:
- TAGS=
- GOFLAGS=-race -parallel=2
```
make stressrace TESTS=TestSingleKey PKG=./pkg/kv/kvserver TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestSingleKey.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
Jira issue: CRDB-3667 | 1.0 | kv/kvserver: TestSingleKey failed - [(kv/kvserver).TestSingleKey failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2355430&tab=buildLog) on [master@e824da5a2b33168fa4ef93c83295c8205acfadb7](https://github.com/cockroachdb/cockroach/commits/e824da5a2b33168fa4ef93c83295c8205acfadb7):
```
=== RUN TestSingleKey
--- FAIL: TestSingleKey (0.00s)
test_log_scope.go:107: can't use TestLogScope with secondary loggers active
```
<details><summary>More</summary><p>
Parameters:
- TAGS=
- GOFLAGS=-race -parallel=2
```
make stressrace TESTS=TestSingleKey PKG=./pkg/kv/kvserver TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestSingleKey.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
Jira issue: CRDB-3667 | test | kv kvserver testsinglekey failed on run testsinglekey fail testsinglekey test log scope go can t use testlogscope with secondary loggers active more parameters tags goflags race parallel make stressrace tests testsinglekey pkg pkg kv kvserver testtimeout stressflags timeout powered by jira issue crdb | 1 |
36,628 | 5,074,694,825 | IssuesEvent | 2016-12-27 15:40:56 | italia-it/designer.italia.it | https://api.github.com/repos/italia-it/designer.italia.it | closed | Elenchi e ricerche di servizi | ambito:discussione aperta ambito:testi linee guida priorita:normale | Tutte le PA presentano in modo variegato un elenco di servizi:
Occorre individuare una buona pratica di riferimento per mostrare/ filtrare elenchi di servizi.
con particolare attenzione all'omoneizzazione delle categorie e delle dimensioni delle categorie stesse.
Esempi:
- 1. http://www.spid.gov.it/cerca
- 2. http://servizi.regione.fvg.it/portale/
- 3. https://www.inps.it/portale/default.aspx?sID=%3b0%3b11513%3b&lastMenu=11513&iMenu=2&p4=2
- 4. http://servizi.toscana.it
- 5. https://www.comune.roma.it/pcr/it/elenco_servizi_online.page
hanno approcci molto diversi.
Come primo contributo si suggerisce di taggare (per poi poter ricercare) ogni servizio per
- dimensione territoriale [Nazionale, interregionale , Regionale, intercomunale, Comunale] taggando in modo codificato unico (si faccia riferimaneto a ipa o istat)
- area funzionale (vedi esempio fvg o prese4ntazione catalogo servizi)
-tipologia utente:(Extra eu, Eu,it) X cittadino, professionista , impresa.
-evento della vita di riferimento
- altro
Qundi indivisare una unica ux di riferimento da rendere comune.
| 1.0 | Elenchi e ricerche di servizi - Tutte le PA presentano in modo variegato un elenco di servizi:
Occorre individuare una buona pratica di riferimento per mostrare/ filtrare elenchi di servizi.
con particolare attenzione all'omoneizzazione delle categorie e delle dimensioni delle categorie stesse.
Esempi:
- 1. http://www.spid.gov.it/cerca
- 2. http://servizi.regione.fvg.it/portale/
- 3. https://www.inps.it/portale/default.aspx?sID=%3b0%3b11513%3b&lastMenu=11513&iMenu=2&p4=2
- 4. http://servizi.toscana.it
- 5. https://www.comune.roma.it/pcr/it/elenco_servizi_online.page
hanno approcci molto diversi.
Come primo contributo si suggerisce di taggare (per poi poter ricercare) ogni servizio per
- dimensione territoriale [Nazionale, interregionale , Regionale, intercomunale, Comunale] taggando in modo codificato unico (si faccia riferimaneto a ipa o istat)
- area funzionale (vedi esempio fvg o prese4ntazione catalogo servizi)
-tipologia utente:(Extra eu, Eu,it) X cittadino, professionista , impresa.
-evento della vita di riferimento
- altro
Qundi indivisare una unica ux di riferimento da rendere comune.
| test | elenchi e ricerche di servizi tutte le pa presentano in modo variegato un elenco di servizi occorre individuare una buona pratica di riferimento per mostrare filtrare elenchi di servizi con particolare attenzione all omoneizzazione delle categorie e delle dimensioni delle categorie stesse esempi hanno approcci molto diversi come primo contributo si suggerisce di taggare per poi poter ricercare ogni servizio per dimensione territoriale taggando in modo codificato unico si faccia riferimaneto a ipa o istat area funzionale vedi esempio fvg o catalogo servizi tipologia utente extra eu eu it x cittadino professionista impresa evento della vita di riferimento altro qundi indivisare una unica ux di riferimento da rendere comune | 1 |