column         type           values
Unnamed: 0     int64          0 .. 832k
id             float64        2.49B .. 32.1B
type           stringclasses  1 value
created_at     stringlengths  19 .. 19
repo           stringlengths  7 .. 112
repo_url       stringlengths  36 .. 141
action         stringclasses  3 values
title          stringlengths  1 .. 744
labels         stringlengths  4 .. 574
body           stringlengths  9 .. 211k
index          stringclasses  10 values
text_combine   stringlengths  96 .. 211k
label          stringclasses  2 values
text           stringlengths  96 .. 188k
binary_label   int64          0 .. 1
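From the rows below, two derived columns are visible: `text_combine` is the issue title joined to the body with " - ", and `text` looks like a lowercased, de-markup'd, digit- and punctuation-stripped version of it. The exact pipeline that produced the dataset is not documented here, so the `clean` function is only an approximation of that normalisation:

```python
import re

def combine(title: str, body: str) -> str:
    # `text_combine` in the rows below is title + " - " + body.
    return f"{title} - {body}"

def clean(text: str) -> str:
    # Rough sketch of the normalisation seen in the `text` column:
    # lowercase, drop URLs, keep only letters and spaces, collapse
    # whitespace. This approximates, not reproduces, the original
    # preprocessing.
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"[^a-z ]", " ", text)
    return re.sub(r"\s+", " ", text).strip()
```

For example, `clean("Open `brave://rewards`")` yields "open brave rewards", matching the style of the `text` field in the first row.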
Unnamed: 0: 309,211
id: 26,656,771,345
type: IssuesEvent
created_at: 2023-01-25 17:26:49
repo: brave/brave-browser
repo_url: https://api.github.com/repos/brave/brave-browser
action: closed
title: Remove dashes from estimated ads earnings in `grandfathered unverified` state
labels: suggestion feature/rewards QA/Yes QA/Test-Plan-Specified feature/ads OS/Desktop
body:
Not sure why they exist, just leave `Learn More` ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. Install 1.47.x 1. Clean profile 1. Enable rewards and ads 1. Upgrade to 1.48.x 1. Open `brave://rewards` 1. Open NTP ## Actual result: <!--Please add screenshots if needed--> dashes are shown next to `Learn More` `brave://rewards` ![image](https://user-images.githubusercontent.com/34715963/214157623-c25b389b-0b8b-4c31-90a0-0eb9034045fc.png) `NTP` ![image](https://user-images.githubusercontent.com/34715963/214160956-645d7470-c5be-481d-837e-579d129f5e3a.png) ## Expected result: dashes are not shown next to `Learn More` `brave://rewards` ![image](https://user-images.githubusercontent.com/34715963/214159493-c08371d0-9256-4de4-b960-41f348fa5ac1.png) `NTP` ![image](https://user-images.githubusercontent.com/34715963/214160902-2700fd51-684b-484f-851d-9535edd5702f.png) ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> Easily reproduced ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.48.132 Chromium: 109.0.5414.87 (Official Build) beta (64-bit) -- | -- Revision | 2dc18eb511c56e012081b4abc9e38c81c885f7d4-refs/branch-heads/5414@{#1241} OS | Linux cc @tmancey @Miyayes @LaurenWags
label: 1.0
text_combine:
Remove dashes from estimated ads earnings in `grandfathered unverified` state - Not sure why they exist, just leave `Learn More` ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. Install 1.47.x 1. Clean profile 1. Enable rewards and ads 1. Upgrade to 1.48.x 1. Open `brave://rewards` 1. Open NTP ## Actual result: <!--Please add screenshots if needed--> dashes are shown next to `Learn More` `brave://rewards` ![image](https://user-images.githubusercontent.com/34715963/214157623-c25b389b-0b8b-4c31-90a0-0eb9034045fc.png) `NTP` ![image](https://user-images.githubusercontent.com/34715963/214160956-645d7470-c5be-481d-837e-579d129f5e3a.png) ## Expected result: dashes are not shown next to `Learn More` `brave://rewards` ![image](https://user-images.githubusercontent.com/34715963/214159493-c08371d0-9256-4de4-b960-41f348fa5ac1.png) `NTP` ![image](https://user-images.githubusercontent.com/34715963/214160902-2700fd51-684b-484f-851d-9535edd5702f.png) ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> Easily reproduced ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.48.132 Chromium: 109.0.5414.87 (Official Build) beta (64-bit) -- | -- Revision | 2dc18eb511c56e012081b4abc9e38c81c885f7d4-refs/branch-heads/5414@{#1241} OS | Linux cc @tmancey @Miyayes @LaurenWags
index: non_process
text:
remove dashes from estimated ads earnings in grandfathered unverified state not sure why they exist just leave learn more steps to reproduce install x clean profile enable rewards and ads upgrade to x open brave rewards open ntp actual result dashes are shown next to learn more brave rewards ntp expected result dashes are not shown next to learn more brave rewards ntp reproduces how often easily reproduced brave version brave version info brave chromium   official build  beta  bit revision refs branch heads os linux cc tmancey miyayes laurenwags
binary_label: 0
Unnamed: 0: 528,026
id: 15,358,631,393
type: IssuesEvent
created_at: 2021-03-01 15:00:12
repo: CultureFoundryCA/WebModules
repo_url: https://api.github.com/repos/CultureFoundryCA/WebModules
action: closed
title: In the basket game, if you highlight the text in the baskets and try to move the text, the application fails
labels: Priority: low back end bug
body:
This is similar to the issue we had when the baskets would move. The baskets do not throw the error anymore, but the text does.
label: 1.0
text_combine:
In the basket game, if you highlight the text in the baskets and try to move the text, the application fails - This is similar to the issue we had when the baskets would move. The baskets do not throw the error anymore, but the text does.
index: non_process
text:
in the basket game if you highlight the text in the baskets and try to move the text the application fails this is similar to the issue we had when the baskets would move the baskets do not throw the error anymore but the text does
binary_label: 0
Unnamed: 0: 12,100
id: 14,740,189,744
type: IssuesEvent
created_at: 2021-01-07 08:40:22
repo: kdjstudios/SABillingGitlab
repo_url: https://api.github.com/repos/kdjstudios/SABillingGitlab
action: opened
title: Terminated Accounts Generating invoices if usage
labels: anc-process anp-1 ant-enhancement ant-parent/primary grt-billing pl-foran
body:
In GitLab by @kdjstudios on Oct 16, 2018, 10:26 **Submitted by:** "Cori Bartlett" <cori.bartlett@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/5876253 **Server:** All **Client/Site:** All **Account:** All **Issue:** I know we made a recent change, but would like to change again. We do not want terminated accounts to generate an invoice if usage. Thanks, Cori
label: 1.0
text_combine:
Terminated Accounts Generating invoices if usage - In GitLab by @kdjstudios on Oct 16, 2018, 10:26 **Submitted by:** "Cori Bartlett" <cori.bartlett@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/5876253 **Server:** All **Client/Site:** All **Account:** All **Issue:** I know we made a recent change, but would like to change again. We do not want terminated accounts to generate an invoice if usage. Thanks, Cori
index: process
text:
terminated accounts generating invoices if usage in gitlab by kdjstudios on oct submitted by cori bartlett helpdesk server all client site all account all issue i know we made a recent change but would like to change again we do not want terminated accounts to generate an invoice if usage thanks cori
binary_label: 1
Unnamed: 0: 13,119
id: 15,504,851,504
type: IssuesEvent
created_at: 2021-03-11 14:44:24
repo: MicrosoftDocs/azure-docs
repo_url: https://api.github.com/repos/MicrosoftDocs/azure-docs
action: closed
title: Auto sync feature is not working for azure git repo.
labels: Pri2 assigned-to-author automation/svc process-automation/subsvc product-question triaged
body:
Is this by design? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 83c90e64-b615-711f-a53d-fc76606e2ecd * Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea * Content: [Source Control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration#feedback) * Content Source: [articles/automation/source-control-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/source-control-integration.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**
label: 1.0
text_combine:
Auto sync feature is not working for azure git repo. - Is this by design? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 83c90e64-b615-711f-a53d-fc76606e2ecd * Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea * Content: [Source Control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration#feedback) * Content Source: [articles/automation/source-control-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/source-control-integration.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**
index: process
text:
auto sync feature is not working for azure git repo is this by design document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
binary_label: 1
Unnamed: 0: 8,227
id: 11,414,375,590
type: IssuesEvent
created_at: 2020-02-02 02:21:13
repo: qgis/QGIS
repo_url: https://api.github.com/repos/qgis/QGIS
action: closed
title: Densify by interval always resorts to default value if field is given
labels: Bug Processing
body:
**Describe the bug** QGIS 3.10.1 > Vector geometry > Densify by interval The tool works properly when a direct value is given. When a field type is given, it always resorts to the default value of 1,0. **How to Reproduce** Open the tool. Select a layer with linestrings which has attributes/fields. Click on the "data defined override" button and select a field. In my case it is a field of type double called "ps_width". When the job has run the log shows that the script tried to use the field ('INTERVAL' : QgsProperty.fromExpression('"ps_width"'). The value of the field is 2,8. The value used for the calculations is still 1,0 (the default) ![image](https://user-images.githubusercontent.com/2045223/71125342-0a2b6700-21e7-11ea-8808-01c0abbbe16c.png) ![image](https://user-images.githubusercontent.com/2045223/71125849-03e9ba80-21e8-11ea-975e-a4d3e99cd1f1.png) **QGIS and OS versions** ![image](https://user-images.githubusercontent.com/2045223/71126054-7bb7e500-21e8-11ea-8321-253c43d779bb.png)
label: 1.0
text_combine:
Densify by interval always resorts to default value if field is given - **Describe the bug** QGIS 3.10.1 > Vector geometry > Densify by interval The tool works properly when a direct value is given. When a field type is given, it always resorts to the default value of 1,0. **How to Reproduce** Open the tool. Select a layer with linestrings which has attributes/fields. Click on the "data defined override" button and select a field. In my case it is a field of type double called "ps_width". When the job has run the log shows that the script tried to use the field ('INTERVAL' : QgsProperty.fromExpression('"ps_width"'). The value of the field is 2,8. The value used for the calculations is still 1,0 (the default) ![image](https://user-images.githubusercontent.com/2045223/71125342-0a2b6700-21e7-11ea-8808-01c0abbbe16c.png) ![image](https://user-images.githubusercontent.com/2045223/71125849-03e9ba80-21e8-11ea-975e-a4d3e99cd1f1.png) **QGIS and OS versions** ![image](https://user-images.githubusercontent.com/2045223/71126054-7bb7e500-21e8-11ea-8321-253c43d779bb.png)
index: process
text:
densify by interval always resorts to default value if field is given describe the bug qgis vector geometry densify by interval the tool works properly when a direct value is given when a field type is given it always resorts to the default value of how to reproduce open the tool select a layer with linestrings which has attributes fields click on the data defined override button and select a field in my case it is a field of type double called ps width when the job has run the log shows that the script tried to use the field interval qgsproperty fromexpression ps width the value of the field is the value used for the calculations is still the default qgis and os versions
binary_label: 1
Unnamed: 0: 162,832
id: 25,602,370,007
type: IssuesEvent
created_at: 2022-12-01 21:28:43
repo: NCIOCPL/cgov-digital-platform
repo_url: https://api.github.com/repos/NCIOCPL/cgov-digital-platform
action: closed
title: Page Options accessibility labeling issues
labels: Drupal - Redesign
body:
1. The SVG icons for the page options gets flagged by Google as an accessibility issue because the id is the same at the top and bottom on the <title> element. 2. The text of the SVG icon title element is the English text. 3. The aria-label attribute for the Page options is not translated. Item 1 is a problem for Google lighthouse. Accessibility score of 91 vs 99 on a couple of pages. This may trip other accessibility checkers. Items 2 & 3 are usability issues for Spanish speakers. Additionally, the Axe tool is flagging Page Options as also having other issues in addition to duplicate ids. 1. All page content should be contained by landmarks. (This applies to the top and bottom) 2. Elements must only use allowed ARIA attributes. (This applies to the top and bottom)
label: 1.0
text_combine:
Page Options accessibility labeling issues - 1. The SVG icons for the page options gets flagged by Google as an accessibility issue because the id is the same at the top and bottom on the <title> element. 2. The text of the SVG icon title element is the English text. 3. The aria-label attribute for the Page options is not translated. Item 1 is a problem for Google lighthouse. Accessibility score of 91 vs 99 on a couple of pages. This may trip other accessibility checkers. Items 2 & 3 are usability issues for Spanish speakers. Additionally, the Axe tool is flagging Page Options as also having other issues in addition to duplicate ids. 1. All page content should be contained by landmarks. (This applies to the top and bottom) 2. Elements must only use allowed ARIA attributes. (This applies to the top and bottom)
index: non_process
text:
page options accessibility labeling issues the svg icons for the page options gets flagged by google as an accessibility issue because the id is the same at the top and bottom on the element the text of the svg icon title element is the english text the aria label attribute for the page options is not translated item is a problem for google lighthouse accessibility score of vs on a couple of pages this may trip other accessibility checkers items are usability issues for spanish speakers additionally the axe tool is flagging page options as also having other issues in addition to duplicate ids all page content should be contained by landmarks this applies to the top and bottom elements must only use allowed aria attributes this applies to the top and bottom
binary_label: 0
Unnamed: 0: 560,524
id: 16,598,871,334
type: IssuesEvent
created_at: 2021-06-01 16:31:50
repo: canonical-web-and-design/jaas-dashboard
repo_url: https://api.github.com/repos/canonical-web-and-design/jaas-dashboard
action: closed
title: ActionLogs page fails when charm has been removed
labels: Priority: High
body:
To reproduce: - Add a charm and run a few actions. - Then remove the charm. - View the Action Logs page. ` Uncaught TypeError: modelStatusData.applications[appName] is undefined`
label: 1.0
text_combine:
ActionLogs page fails when charm has been removed - To reproduce: - Add a charm and run a few actions. - Then remove the charm. - View the Action Logs page. ` Uncaught TypeError: modelStatusData.applications[appName] is undefined`
index: non_process
text:
actionlogs page fails when charm has been removed to reproduce add a charm and run a few actions then remove the charm view the action logs page uncaught typeerror modelstatusdata applications is undefined
binary_label: 0
Unnamed: 0: 185,174
id: 21,785,096,289
type: IssuesEvent
created_at: 2022-05-14 02:28:41
repo: dmartinez777/AzureDevOpsAngular
repo_url: https://api.github.com/repos/dmartinez777/AzureDevOpsAngular
action: closed
title: WS-2019-0424 (Medium) detected in elliptic-6.5.2.tgz - autoclosed
labels: security vulnerability
body:
## WS-2019-0424 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.2.tgz</b></p></summary> <p>EC cryptography</p> <p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/AzureDevOpsAngular/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/AzureDevOpsAngular/node_modules/elliptic/package.json</p> <p> Dependency Hierarchy: - build-angular-0.803.20.tgz (Root Library) - webpack-4.39.2.tgz - node-libs-browser-2.2.1.tgz - crypto-browserify-3.12.0.tgz - browserify-sign-4.0.4.tgz - :x: **elliptic-6.5.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/xlordt/AzureDevOpsAngular/commit/f76a189753b8cb4af5a6ff78866473d143697338">f76a189753b8cb4af5a6ff78866473d143697338</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> all versions of elliptic are vulnerable to Timing Attack through side-channels. <p>Publish Date: 2019-11-13 <p>URL: <a href=https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a>WS-2019-0424</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Adjacent - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: True
text_combine:
WS-2019-0424 (Medium) detected in elliptic-6.5.2.tgz - autoclosed - ## WS-2019-0424 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.2.tgz</b></p></summary> <p>EC cryptography</p> <p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/AzureDevOpsAngular/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/AzureDevOpsAngular/node_modules/elliptic/package.json</p> <p> Dependency Hierarchy: - build-angular-0.803.20.tgz (Root Library) - webpack-4.39.2.tgz - node-libs-browser-2.2.1.tgz - crypto-browserify-3.12.0.tgz - browserify-sign-4.0.4.tgz - :x: **elliptic-6.5.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/xlordt/AzureDevOpsAngular/commit/f76a189753b8cb4af5a6ff78866473d143697338">f76a189753b8cb4af5a6ff78866473d143697338</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> all versions of elliptic are vulnerable to Timing Attack through side-channels. 
<p>Publish Date: 2019-11-13 <p>URL: <a href=https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a>WS-2019-0424</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Adjacent - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: non_process
text:
ws medium detected in elliptic tgz autoclosed ws medium severity vulnerability vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file tmp ws scm azuredevopsangular package json path to vulnerable library tmp ws scm azuredevopsangular node modules elliptic package json dependency hierarchy build angular tgz root library webpack tgz node libs browser tgz crypto browserify tgz browserify sign tgz x elliptic tgz vulnerable library found in head commit a href vulnerability details all versions of elliptic are vulnerable to timing attack through side channels publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact high availability impact none for more information on scores click a href step up your open source security game with whitesource
binary_label: 0
Unnamed: 0: 211,673
id: 23,835,723,796
type: IssuesEvent
created_at: 2022-09-06 05:34:40
repo: ioana-nicolae/keycloak
repo_url: https://api.github.com/repos/ioana-nicolae/keycloak
action: opened
title: CVE-2022-38752 (Medium) detected in multiple libraries
labels: security vulnerability
body:
## CVE-2022-38752 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>snakeyaml-1.27.jar</b>, <b>snakeyaml-1.19.jar</b>, <b>snakeyaml-1.17.jar</b>, <b>snakeyaml-1.14.jar</b></p></summary> <p> <details><summary><b>snakeyaml-1.27.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /quarkus/runtime/pom.xml</p> <p>Path to vulnerable library: /2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar</p> <p> Dependency Hierarchy: - :x: **snakeyaml-1.27.jar** (Vulnerable Library) </details> <details><summary><b>snakeyaml-1.19.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /misc/spring-boot-starter/keycloak-spring-boot-starter/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library) - spring-boot-starter-2.0.5.RELEASE.jar - :x: **snakeyaml-1.19.jar** (Vulnerable Library) </details> <details><summary><b>snakeyaml-1.17.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /adapters/oidc/spring-boot-adapter-core/pom.xml</p> <p>Path to vulnerable library: 
/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.17/snakeyaml-1.17.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.17/snakeyaml-1.17.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.17/snakeyaml-1.17.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.17/snakeyaml-1.17.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-1.5.16.RELEASE.jar (Root Library) - spring-boot-starter-1.5.16.RELEASE.jar - :x: **snakeyaml-1.17.jar** (Vulnerable Library) </details> <details><summary><b>snakeyaml-1.14.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /testsuite/integration-arquillian/servers/auth-server/undertow/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar</p> <p> Dependency Hierarchy: - integration-arquillian-testsuite-providers-13.0.0-SNAPSHOT.jar (Root Library) - keycloak-dependencies-server-all-13.0.0-SNAPSHOT.pom - openshift-restclient-java-8.0.0.Final.jar - :x: **snakeyaml-1.14.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a 
href="https://github.com/ioana-nicolae/keycloak/commit/34eee947640ca637662cb41e649c6acf8b6d8c2e">34eee947640ca637662cb41e649c6acf8b6d8c2e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack-overflow. <p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752>CVE-2022-38752</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p>
label: True
text_combine:
CVE-2022-38752 (Medium) detected in multiple libraries - ## CVE-2022-38752 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>snakeyaml-1.27.jar</b>, <b>snakeyaml-1.19.jar</b>, <b>snakeyaml-1.17.jar</b>, <b>snakeyaml-1.14.jar</b></p></summary> <p> <details><summary><b>snakeyaml-1.27.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /quarkus/runtime/pom.xml</p> <p>Path to vulnerable library: /2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar</p> <p> Dependency Hierarchy: - :x: **snakeyaml-1.27.jar** (Vulnerable Library) </details> <details><summary><b>snakeyaml-1.19.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /misc/spring-boot-starter/keycloak-spring-boot-starter/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library) - spring-boot-starter-2.0.5.RELEASE.jar - :x: **snakeyaml-1.19.jar** (Vulnerable Library) </details> <details><summary><b>snakeyaml-1.17.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /adapters/oidc/spring-boot-adapter-core/pom.xml</p> <p>Path to vulnerable library: 
/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.17/snakeyaml-1.17.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.17/snakeyaml-1.17.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.17/snakeyaml-1.17.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.17/snakeyaml-1.17.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-1.5.16.RELEASE.jar (Root Library) - spring-boot-starter-1.5.16.RELEASE.jar - :x: **snakeyaml-1.17.jar** (Vulnerable Library) </details> <details><summary><b>snakeyaml-1.14.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /testsuite/integration-arquillian/servers/auth-server/undertow/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.14/snakeyaml-1.14.jar</p> <p> Dependency Hierarchy: - integration-arquillian-testsuite-providers-13.0.0-SNAPSHOT.jar (Root Library) - keycloak-dependencies-server-all-13.0.0-SNAPSHOT.pom - openshift-restclient-java-8.0.0.Final.jar - :x: **snakeyaml-1.14.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a 
href="https://github.com/ioana-nicolae/keycloak/commit/34eee947640ca637662cb41e649c6acf8b6d8c2e">34eee947640ca637662cb41e649c6acf8b6d8c2e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack-overflow. <p>Publish Date: 2022-09-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752>CVE-2022-38752</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p>
index: non_process
text:
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries snakeyaml jar snakeyaml jar snakeyaml jar snakeyaml jar snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file quarkus runtime pom xml path to vulnerable library repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy x snakeyaml jar vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file misc spring boot starter keycloak spring boot starter pom xml path to vulnerable library home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter web release jar root library spring boot starter release jar x snakeyaml jar vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file adapters oidc spring boot adapter core pom xml path to vulnerable library home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter web release jar root library spring boot starter release jar x snakeyaml jar vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file testsuite integration arquillian servers auth server undertow pom xml path to vulnerable library home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml 
snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy integration arquillian testsuite providers snapshot jar root library keycloak dependencies server all snapshot pom openshift restclient java final jar x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stack overflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
0
347,912
10,436,290,885
IssuesEvent
2019-09-17 19:13:29
supergrecko/QuestionBot
https://api.github.com/repos/supergrecko/QuestionBot
closed
core: Implement preconditions
effort:low priority:high scope:dev status:work-in-progress type:feature-request
Need to implement preconditions to limit commands to certain groups of users
1.0
core: Implement preconditions - Need to implement preconditions to limit commands to certain groups of users
non_process
core implement preconditions need to implement preconditions to limit commands to certain groups of users
0
189,941
22,047,158,215
IssuesEvent
2022-05-30 04:00:58
pazhanivel07/linux-4.19.72
https://api.github.com/repos/pazhanivel07/linux-4.19.72
closed
CVE-2019-19037 (Medium) detected in linux-yoctov5.4.51 - autoclosed
security vulnerability
## CVE-2019-19037 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/linux-4.19.72/commit/ce28e4f7a922d93d9b737061ae46827305c8c30a">ce28e4f7a922d93d9b737061ae46827305c8c30a</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ext4_empty_dir in fs/ext4/namei.c in the Linux kernel through 5.3.12 allows a NULL pointer dereference because ext4_read_dirblock(inode,0,DIRENT_HTREE) can be zero. WhiteSource Note: After conducting further research, WhiteSource has determined that versions v2.6.30-rc1-v4.9.207, v4.10-rc1-v4.14.160, v4.15-rc1--v4.19.91, v5.0-rc1--v5.4.6 and v5.5-rc1--v5.5-rc2 of Linux Kernel are vulnerable to CVE-2019-19037. 
<p>Publish Date: 2019-11-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19037>CVE-2019-19037</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-19037">https://www.linuxkernelcves.com/cves/CVE-2019-19037</a></p> <p>Release Date: 2019-11-21</p> <p>Fix Resolution: v4.9.208, v4.14.161, v4.19.92, v5.4.7, v5.5-rc3,</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-19037 (Medium) detected in linux-yoctov5.4.51 - autoclosed - ## CVE-2019-19037 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/linux-4.19.72/commit/ce28e4f7a922d93d9b737061ae46827305c8c30a">ce28e4f7a922d93d9b737061ae46827305c8c30a</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ext4_empty_dir in fs/ext4/namei.c in the Linux kernel through 5.3.12 allows a NULL pointer dereference because ext4_read_dirblock(inode,0,DIRENT_HTREE) can be zero. WhiteSource Note: After conducting further research, WhiteSource has determined that versions v2.6.30-rc1-v4.9.207, v4.10-rc1-v4.14.160, v4.15-rc1--v4.19.91, v5.0-rc1--v5.4.6 and v5.5-rc1--v5.5-rc2 of Linux Kernel are vulnerable to CVE-2019-19037. 
<p>Publish Date: 2019-11-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19037>CVE-2019-19037</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-19037">https://www.linuxkernelcves.com/cves/CVE-2019-19037</a></p> <p>Release Date: 2019-11-21</p> <p>Fix Resolution: v4.9.208, v4.14.161, v4.19.92, v5.4.7, v5.5-rc3,</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linux autoclosed cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details empty dir in fs namei c in the linux kernel through allows a null pointer dereference because read dirblock inode dirent htree can be zero whitesource note after conducting further research whitesource has determined that versions and of linux kernel are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
373,953
11,053,229,181
IssuesEvent
2019-12-10 10:56:03
PRIDE-Archive/pride-web
https://api.github.com/repos/PRIDE-Archive/pride-web
closed
Private submissions are not shown
bug high-priority
@shabai517 we need to check the private datasets for submitters because they are not shown. The API call is given the right results but the page is not showing the projects.
1.0
Private submissions are not shown - @shabai517 we need to check the private datasets for submitters because they are not shown. The API call is given the right results but the page is not showing the projects.
non_process
private submissions are not shown we need to check the private datasets for submitters because they are not shown the api call is given the right results but the page is not showing the projects
0
6,472
3,023,794,177
IssuesEvent
2015-08-01 22:00:25
edelbluth/blackred
https://api.github.com/repos/edelbluth/blackred
closed
Add Code Samples to Documentation
Accepted Documentation Enhancement Next Version
Some explained/explaining code samples would help to understand how to use BlackRed. Examples: - Actual login sequence with Django - Using global and per-new-instance settings
1.0
Add Code Samples to Documentation - Some explained/explaining code samples would help to understand how to use BlackRed. Examples: - Actual login sequence with Django - Using global and per-new-instance settings
non_process
add code samples to documentation some explained explaining code samples would help to understand how to use blackred examples actual login sequence with django using global and per new instance settings
0
18,876
24,809,915,999
IssuesEvent
2022-10-25 08:37:20
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
NTR inhibition of ectopic tissue mineralization
organism-level process
See https://github.com/geneontology/go-annotation/issues/4338 To annotate genes that increase levels of inorganic pyrophosphate, a mineralization inhibitor that prevents ossification/mineralization of non-bone tissue. Parent could be: GO:0001894 tissue homeostasis definition: A homeostatic process involved in the maintenance of non-mineral tissue, by preventing ~ectopic~ mineralization of non-bone tissue. References: PMID:21490328 PMID:30030150
1.0
NTR inhibition of ectopic tissue mineralization - See https://github.com/geneontology/go-annotation/issues/4338 To annotate genes that increase levels of inorganic pyrophosphate, a mineralization inhibitor that prevents ossification/mineralization of non-bone tissue. Parent could be: GO:0001894 tissue homeostasis definition: A homeostatic process involved in the maintenance of non-mineral tissue, by preventing ~ectopic~ mineralization of non-bone tissue. References: PMID:21490328 PMID:30030150
process
ntr inhibition of ectopic tissue mineralization see to annotate genes that increase levels of inorganic pyrophosphate a mineralization inhibitor that prevents ossification mineralization of non bone tissue parent could be go tissue homeostasis definition a homeostatic process involved in the maintenance of non mineral tissue by preventing ectopic mineralization of non bone tissue references pmid pmid
1
72,645
19,400,147,716
IssuesEvent
2021-12-19 02:27:50
docker/hub-feedback
https://api.github.com/repos/docker/hub-feedback
closed
A feature request for being able to automate tagging images with a build date
Stale Autobuild
## Problem description Docker images that are intended as build environments and contain only build tools and 3rd-party packages, such as C++ build images with GCC, make, and various library packages, need to be updated only when Dockerfile on their branch changes sufficiently to require another image build. This means that most of the time they are built manually, either locally and then pushed to Docker Hub or by clicking `Trigger` on Docker Hub. However, the latter requires that build configuration is removed and new one is added to accommodate new image tags, which is a bit of a hassle that can be avoided. Docker images with build dependencies cannot be tagged with branch names because there can be different dependencies on the same branch on the way to some release. Consider the following timeline: ``` -----*------------*-----------*----------------*--------> master dep A dep B dep C planned 20210301 20210405 20210520 2.0.0 ``` Within a few months I may introduce dependencies `A`, `B` and `C` and for each of them will build a Docker image that adds each dependency on the date mentioned. All of those images are intended for the upcoming release `2.0.0`, but each is supposed to be pulled from image repository only when building source before the next image is created. So, for example, if I use `git bisect`, each build will be made with the correct Docker image Using `latest` image doesn't work for this, as some dependencies may change in the way that makes them incompatible with earlier code (e.g. dependency `A` is updated to a newer version, so old source will break). Same goes for using release branch names, as there may be more than one build image with different dependencies for the same release branch. The only way to make it work reliably is to tag images with dates. 
What I have been doing on the command line was to tag each new image with a date and add `latest`, for convenience, mostly, as it only works for manual references and not automated tools because a pipeline referring to `latest` in the past would pull the most updated image that wouldn't work with the source at that commit. Recently I learned that Docker Hub allows combining tags, which isn't well documented, but works quite nicely, so having a tag `20210620,latest` creates both tags after the build. This is nice because it saves the trouble of having to use the command line. However, tags in Docker Hub cannot be edited, so each time I need to update an image, I have to delete the configuration and add a new one with the current date, which is a bit of an unnecessary hassle. If there was a special tag, something along the lines of `{date}`, which would resolve to the current date, maybe with an optional format, then one would have build tags configured as `{date},latest` and would just need to click `Trigger` to make a new build tagged with a date.
1.0
A feature request for being able to automate tagging images with a build date - ## Problem description Docker images that are intended as build environments and contain only build tools and 3rd-party packages, such as C++ build images with GCC, make, and various library packages, need to be updated only when Dockerfile on their branch changes sufficiently to require another image build. This means that most of the time they are built manually, either locally and then pushed to Docker Hub or by clicking `Trigger` on Docker Hub. However, the latter requires that build configuration is removed and new one is added to accommodate new image tags, which is a bit of a hassle that can be avoided. Docker images with build dependencies cannot be tagged with branch names because there can be different dependencies on the same branch on the way to some release. Consider the following timeline: ``` -----*------------*-----------*----------------*--------> master dep A dep B dep C planned 20210301 20210405 20210520 2.0.0 ``` Within a few months I may introduce dependencies `A`, `B` and `C` and for each of them will build a Docker image that adds each dependency on the date mentioned. All of those images are intended for the upcoming release `2.0.0`, but each is supposed to be pulled from image repository only when building source before the next image is created. So, for example, if I use `git bisect`, each build will be made with the correct Docker image Using `latest` image doesn't work for this, as some dependencies may change in the way that makes them incompatible with earlier code (e.g. dependency `A` is updated to a newer version, so old source will break). Same goes for using release branch names, as there may be more than one build image with different dependencies for the same release branch. The only way to make it work reliably is to tag images with dates. 
What I have been doing on the command line was to tag each new image with a date and add `latest`, for convenience, mostly, as it only works for manual references and not automated tools because a pipeline referring to `latest` in the past would pull the most updated image that wouldn't work with the source at that commit. Recently I learned that Docker Hub allows combining tags, which isn't well documented, but works quite nicely, so having a tag `20210620,latest` creates both tags after the build. This is nice because it saves the trouble of having to use the command line. However, tags in Docker Hub cannot be edited, so each time I need to update an image, I have to delete the configuration and add a new one with the current date, which is a bit of an unnecessary hassle. If there was a special tag, something along the lines of `{date}`, which would resolve to the current date, maybe with an optional format, then one would have build tags configured as `{date},latest` and would just need to click `Trigger` to make a new build tagged with a date.
non_process
a feature request for being able to automate tagging images with a build date problem description docker images that are intended as build environments and contain only build tools and party packages such as c build images with gcc make and various library packages need to be updated only when dockerfile on their branch changes sufficiently to require another image build this means that most of the time they are built manually either locally and then pushed to docker hub or by clicking trigger on docker hub however the latter requires that build configuration is removed and new one is added to accommodate new image tags which is a bit of a hassle that can be avoided docker images with build dependencies cannot be tagged with branch names because there can be different dependencies on the same branch on the way to some release consider the following timeline master dep a dep b dep c planned within a few months i may introduce dependencies a b and c and for each of them will build a docker image that adds each dependency on the date mentioned all of those images are intended for the upcoming release but each is supposed to be pulled from image repository only when building source before the next image is created so for example if i use git bisect each build will be made with the correct docker image using latest image doesn t work for this as some dependencies may change in the way that makes them incompatible with earlier code e g dependency a is updated to a newer version so old source will break same goes for using release branch names as there may be more than one build image with different dependencies for the same release branch the only way to make it work reliably is to tag images with dates what i have been doing on the command line was to tag each new image with a date and add latest for convenience mostly as it only works for manual references and not automated tools because a pipeline referring to latest in the past would pull the most updated image that 
wouldn t work with the source at that commit recently i learned that docker hub allows combining tags which isn t well documented but works quite nicely so having a tag latest creates both tags after the build this is nice because it saves the trouble of having to use the command line however tags in docker hub cannot be edited so each time i need to update an image i have to delete the configuration and add a new one with the current date which is a bit of an unnecessary hassle if there was a special tag something along the lines of date which would resolve to the current date maybe with an optional format then one would have build tags configured as date latest and would just need to click trigger to make a new build tagged with a date
0
4,000
6,927,137,634
IssuesEvent
2017-11-30 21:41:40
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
Strange behavior with Process creation and termination
area-System.Diagnostics.Process bug
@livarcocc commented on [Thu Mar 16 2017](https://github.com/dotnet/core-setup/issues/1762) _From @mellinoe on April 29, 2016 21:36_ ## Steps to reproduce Consider this program: ``` CSharp public static void Main(string[] args) { ProcessStartInfo psi = new ProcessStartInfo("ping", "github.com"); psi.RedirectStandardOutput = true; Process.Start(psi); Console.WriteLine("Process started."); } ``` NOTE: This is just an easy example, my use case is slightly different (using xdg-open to launch a browser). Ping works here because it usually takes a while to exit. ## Expected behavior The program starts a process, prints out "Process started", and terminates. ## Actual behavior When executed via `dotnet run`, the program does not terminate until the "ping" process terminates. Executing `killall ping` causes the `dotnet` process to terminate, as well, once `ping` has been killed. When I run the same program using corerun or coreconsole, the above behavior is not exhibited. A ping process is started, but the program immediately terminates. Ping continues in the background. ## Environment data `dotnet --info` output: ``` .NET Command Line Tools (1.0.0-rc2-002543) Product Information: Version: 1.0.0-rc2-002543 Commit Sha: 38d0c28a1e Runtime Environment: OS Name: ubuntu OS Version: 15.04 OS Platform: Linux RID: ubuntu.15.04-x64 ``` _Copied from original issue: dotnet/cli#2776_ --- @livarcocc commented on [Thu Mar 16 2017](https://github.com/dotnet/core-setup/issues/1762#issuecomment-287265358) _From @mellinoe on April 29, 2016 21:43_ Another point of data: If I execute `dotnet <test.dll>`, then the correct behavior is exhibited. This repros on both Ubuntu and Windows (all I've tried). 
--- @livarcocc commented on [Thu Mar 16 2017](https://github.com/dotnet/core-setup/issues/1762#issuecomment-287265362) _From @brthor on May 2, 2016 18:53_ It seems like we should be a little more robust in how we manage child processes in `dotnet run` here, it sounds like we are waiting for the entire process tree instead of the immediate child. I don't think this is a blocker for RC2 though, comment if you feel otherwise. --- @livarcocc commented on [Thu Mar 16 2017](https://github.com/dotnet/core-setup/issues/1762#issuecomment-287265367) @mellinoe I just tried this in 1.0.1 and I hit the same thing, except that when I published the app and run it using dotnet <dll> it exhibited the same delay there. So, wondering if that's an issue with dotnet.exe, since that's also what we use during dotnet run. I will move this issue to core-setup. --- @gkhanna79 commented on [Fri Mar 17 2017](https://github.com/dotnet/core-setup/issues/1762#issuecomment-287443578) Dotnet.exe is not launching processes @livarcocc. It simply calls into the managed entrypoint. > I hit the same thing, except that when I published the app and run it using dotnet it exhibited the same delay there How did you repro this? What delay did you see? --- @gkhanna79 commented on [Fri Apr 14 2017](https://github.com/dotnet/core-setup/issues/1762#issuecomment-294252653) The host does not do anything to block/wait against the application aside from invoking its entrypoint. Someone in CoreFX should take a stab.
1.0
Strange behavior with Process creation and termination - @livarcocc commented on [Thu Mar 16 2017](https://github.com/dotnet/core-setup/issues/1762) _From @mellinoe on April 29, 2016 21:36_ ## Steps to reproduce Consider this program: ``` CSharp public static void Main(string[] args) { ProcessStartInfo psi = new ProcessStartInfo("ping", "github.com"); psi.RedirectStandardOutput = true; Process.Start(psi); Console.WriteLine("Process started."); } ``` NOTE: This is just an easy example, my use case is slightly different (using xdg-open to launch a browser). Ping works here because it usually takes a while to exit. ## Expected behavior The program starts a process, prints out "Process started", and terminates. ## Actual behavior When executed via `dotnet run`, the program does not terminate until the "ping" process terminates. Executing `killall ping` causes the `dotnet` process to terminate, as well, once `ping` has been killed. When I run the same program using corerun or coreconsole, the above behavior is not exhibited. A ping process is started, but the program immediately terminates. Ping continues in the background. ## Environment data `dotnet --info` output: ``` .NET Command Line Tools (1.0.0-rc2-002543) Product Information: Version: 1.0.0-rc2-002543 Commit Sha: 38d0c28a1e Runtime Environment: OS Name: ubuntu OS Version: 15.04 OS Platform: Linux RID: ubuntu.15.04-x64 ``` _Copied from original issue: dotnet/cli#2776_ --- @livarcocc commented on [Thu Mar 16 2017](https://github.com/dotnet/core-setup/issues/1762#issuecomment-287265358) _From @mellinoe on April 29, 2016 21:43_ Another point of data: If I execute `dotnet <test.dll>`, then the correct behavior is exhibited. This repros on both Ubuntu and Windows (all I've tried). 
--- @livarcocc commented on [Thu Mar 16 2017](https://github.com/dotnet/core-setup/issues/1762#issuecomment-287265362) _From @brthor on May 2, 2016 18:53_ It seems like we should be a little more robust in how we manage child processes in `dotnet run` here, it sounds like we are waiting for the entire process tree instead of the immediate child. I don't think this is a blocker for RC2 though, comment if you feel otherwise. --- @livarcocc commented on [Thu Mar 16 2017](https://github.com/dotnet/core-setup/issues/1762#issuecomment-287265367) @mellinoe I just tried this in 1.0.1 and I hit the same thing, except that when I published the app and run it using dotnet <dll> it exhibited the same delay there. So, wondering if that's an issue with dotnet.exe, since that's also what we use during dotnet run. I will move this issue to core-setup. --- @gkhanna79 commented on [Fri Mar 17 2017](https://github.com/dotnet/core-setup/issues/1762#issuecomment-287443578) Dotnet.exe is not launching processes @livarcocc. It simply calls into the managed entrypoint. > I hit the same thing, except that when I published the app and run it using dotnet it exhibited the same delay there How did you repro this? What delay did you see? --- @gkhanna79 commented on [Fri Apr 14 2017](https://github.com/dotnet/core-setup/issues/1762#issuecomment-294252653) The host does not do anything to block/wait against the application aside from invoking its entrypoint. Someone in CoreFX should take a stab.
process
strange behavior with process creation and termination livarcocc commented on from mellinoe on april steps to reproduce consider this program csharp public static void main string args processstartinfo psi new processstartinfo ping github com psi redirectstandardoutput true process start psi console writeline process started note this is just an easy example my use case is slightly different using xdg open to launch a browser ping works here because it usually takes a while to exit expected behavior the program starts a process prints out process started and terminates actual behavior when executed via dotnet run the program does not terminate until the ping process terminates executing killall ping causes the dotnet process to terminate as well once ping has been killed when i run the same program using corerun or coreconsole the above behavior is not exhibited a ping process is started but the program immediately terminates ping continues in the background environment data dotnet info output net command line tools product information version commit sha runtime environment os name ubuntu os version os platform linux rid ubuntu copied from original issue dotnet cli livarcocc commented on from mellinoe on april another point of data if i execute dotnet then the correct behavior is exhibited this repros on both ubuntu and windows all i ve tried livarcocc commented on from brthor on may it seems like we should be a little more robust in how we manage child processes in dotnet run here it sounds like we are waiting for the entire process tree instead of the immediate child i don t think this is a blocker for though comment if you feel otherwise livarcocc commented on mellinoe i just tried this in and i hit the same thing except that when i published the app and run it using dotnet it exhibited the same delay there so wondering if that s an issue with dotnet exe since that s also what we use during dotnet run i will move this issue to core setup commented on dotnet exe 
is not launching processes livarcocc it simply calls into the managed entrypoint i hit the same thing except that when i published the app and run it using dotnet it exhibited the same delay there how did you repro this what delay did you see commented on the host does not do anything to block wait against the application aside from invoking its entrypoint someone in corefx should take a stab
1
21,712
30,213,783,224
IssuesEvent
2023-07-05 14:21:24
googleapis/sdk-platform-java
https://api.github.com/repos/googleapis/sdk-platform-java
closed
Transient workflow failures from javax.net.ssl.SSLHandshakeException: PKIX path validation failed
type: process priority: p3
Started noticing CI failures from https://github.com/googleapis/gapic-generator-java/pull/1557 presubmits that do not look related to the changes made. Opening this issue to track and investigate. Failing workflow run: https://github.com/googleapis/gapic-generator-java/actions/runs/4515219935/attempts/2 Latest successful workflow run: https://github.com/googleapis/gapic-generator-java/actions/runs/4514393539 ``` [INFO] --- download-maven-plugin:1.6.8:wget (download-metadata-proto) @ gapic-generator-java --- Warning: Could not get content javax.net.ssl.SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed at sun.security.ssl.Alert.createSSLException (Alert.java:131) at sun.security.ssl.TransportContext.fatal (TransportContext.java:353) ... Error: Failed to execute goal com.googlecode.maven-download-plugin:download-maven-plugin:1.6.8:wget (download-metadata-proto) on project gapic-generator-java: IO Error: Could not get content -> [Help 1] ```
1.0
Transient workflow failures from javax.net.ssl.SSLHandshakeException: PKIX path validation failed - Started noticing CI failures from https://github.com/googleapis/gapic-generator-java/pull/1557 presubmits that do not look related to the changes made. Opening this issue to track and investigate. Failing workflow run: https://github.com/googleapis/gapic-generator-java/actions/runs/4515219935/attempts/2 Latest successful workflow run: https://github.com/googleapis/gapic-generator-java/actions/runs/4514393539 ``` [INFO] --- download-maven-plugin:1.6.8:wget (download-metadata-proto) @ gapic-generator-java --- Warning: Could not get content javax.net.ssl.SSLHandshakeException: PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed at sun.security.ssl.Alert.createSSLException (Alert.java:131) at sun.security.ssl.TransportContext.fatal (TransportContext.java:353) ... Error: Failed to execute goal com.googlecode.maven-download-plugin:download-maven-plugin:1.6.8:wget (download-metadata-proto) on project gapic-generator-java: IO Error: Could not get content -> [Help 1] ```
process
transient workflow failures from javax net ssl sslhandshakeexception pkix path validation failed started noticing ci failures from presubmits that do not look related to the changes made opening this issue to track and investigate failing workflow run latest successful workflow run download maven plugin wget download metadata proto gapic generator java warning could not get content javax net ssl sslhandshakeexception pkix path validation failed java security cert certpathvalidatorexception validity check failed at sun security ssl alert createsslexception alert java at sun security ssl transportcontext fatal transportcontext java error failed to execute goal com googlecode maven download plugin download maven plugin wget download metadata proto on project gapic generator java io error could not get content
1
246,898
7,895,824,011
IssuesEvent
2018-06-29 05:58:29
aowen87/BAR
https://api.github.com/repos/aowen87/BAR
closed
double check python home issues
Likelihood: 3 - Occasional OS: All Priority: Normal Severity: 2 - Minor Irritation Support Group: Any bug version: trunk
I just noticed we can't use PySide with 2.5.2 after the launcher was updated for 2.6. Sounds like a python home issues (the site-packages in <arch>/lib/python/...../site-packages aren't picked up), but ours in lib/site-packages are We need to double check a few of these combos with the new builtin python setup. -----------------------REDMINE MIGRATION----------------------- This ticket was migrated from Redmine. The following information could not be accurately captured in the new ticket: Original author: Cyrus Harrison Original creation: 03/15/2013 03:10 pm Original update: 11/04/2013 05:47 pm Ticket number: 1381
1.0
double check python home issues - I just noticed we can't use PySide with 2.5.2 after the launcher was updated for 2.6. Sounds like a python home issues (the site-packages in <arch>/lib/python/...../site-packages aren't picked up), but ours in lib/site-packages are We need to double check a few of these combos with the new builtin python setup. -----------------------REDMINE MIGRATION----------------------- This ticket was migrated from Redmine. The following information could not be accurately captured in the new ticket: Original author: Cyrus Harrison Original creation: 03/15/2013 03:10 pm Original update: 11/04/2013 05:47 pm Ticket number: 1381
non_process
double check python home issues i just noticed we can t use pyside with after the launcher was updated for sounds like a python home issues the site packages in lib python site packages aren t picked up but ours in lib site packages are we need to double check a few of these combos with the new builtin python setup redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author cyrus harrison original creation pm original update pm ticket number
0
14,271
17,225,651,225
IssuesEvent
2021-07-20 01:00:19
ooi-data/CE09OSSM-RID27-02-FLORTD000-recovered_host-flort_sample
https://api.github.com/repos/ooi-data/CE09OSSM-RID27-02-FLORTD000-recovered_host-flort_sample
opened
🛑 Processing failed: FSTimeoutError
process
## Overview `FSTimeoutError` found in `processing_task` task during run ended on 2021-07-20T01:00:18.408581. ## Details Flow name: `CE09OSSM-RID27-02-FLORTD000-recovered_host-flort_sample` Task name: `processing_task` Error type: `FSTimeoutError` Error message: <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 84, in processing_task File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/convenience.py", line 640, in copy_store data = source[source_key] File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/mapping.py", line 133, in __getitem__ result = self.fs.cat(k) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 87, in wrapper return sync(self.loop, func, *args, **kwargs) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 66, in sync raise FSTimeoutError fsspec.exceptions.FSTimeoutError ``` </details>
1.0
🛑 Processing failed: FSTimeoutError - ## Overview `FSTimeoutError` found in `processing_task` task during run ended on 2021-07-20T01:00:18.408581. ## Details Flow name: `CE09OSSM-RID27-02-FLORTD000-recovered_host-flort_sample` Task name: `processing_task` Error type: `FSTimeoutError` Error message: <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 84, in processing_task File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/convenience.py", line 640, in copy_store data = source[source_key] File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/mapping.py", line 133, in __getitem__ result = self.fs.cat(k) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 87, in wrapper return sync(self.loop, func, *args, **kwargs) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 66, in sync raise FSTimeoutError fsspec.exceptions.FSTimeoutError ``` </details>
process
🛑 processing failed fstimeouterror overview fstimeouterror found in processing task task during run ended on details flow name recovered host flort sample task name processing task error type fstimeouterror error message traceback traceback most recent call last file usr share miniconda envs harvester lib site packages ooi harvester processor pipeline py line in processing task file srv conda envs notebook lib site packages zarr convenience py line in copy store data source file srv conda envs notebook lib site packages fsspec mapping py line in getitem result self fs cat k file srv conda envs notebook lib site packages fsspec asyn py line in wrapper return sync self loop func args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in sync raise fstimeouterror fsspec exceptions fstimeouterror
1
126,091
12,286,711,948
IssuesEvent
2020-05-09 08:46:48
dankamongmen/notcurses
https://api.github.com/repos/dankamongmen/notcurses
closed
Call setlocale(LC_ALL, "") in notcurses_init() unless told not to
documentation enhancement
I'm strongly inclined add a call to `setlocale(LC_ALL, "")` as the first line in `notcurses_init()`, and provide a field in `notcurses_options` allowing the client code to inhibit this. Even with the warning added in #414 , people are going to frustratingly forget to do this. I don't see any great problem with doing so, and think the number of people that will be surprised by this behavior is much smaller than the number of people that will be surprised by the need to do this. I'd be interested in hearing counterarguments, though.
1.0
Call setlocale(LC_ALL, "") in notcurses_init() unless told not to - I'm strongly inclined add a call to `setlocale(LC_ALL, "")` as the first line in `notcurses_init()`, and provide a field in `notcurses_options` allowing the client code to inhibit this. Even with the warning added in #414 , people are going to frustratingly forget to do this. I don't see any great problem with doing so, and think the number of people that will be surprised by this behavior is much smaller than the number of people that will be surprised by the need to do this. I'd be interested in hearing counterarguments, though.
non_process
call setlocale lc all in notcurses init unless told not to i m strongly inclined add a call to setlocale lc all as the first line in notcurses init and provide a field in notcurses options allowing the client code to inhibit this even with the warning added in people are going to frustratingly forget to do this i don t see any great problem with doing so and think the number of people that will be surprised by this behavior is much smaller than the number of people that will be surprised by the need to do this i d be interested in hearing counterarguments though
0
385,104
11,412,566,377
IssuesEvent
2020-02-01 14:05:19
DragonHeart000/ModernWarfareBugs
https://api.github.com/repos/DragonHeart000/ModernWarfareBugs
opened
Crossbow with FTAC Fury 20" Bolts one shots juggernaut
Game Type: Co-op Game Type: Multiplayer Map: Any Priority: High Type: Mechanic
**Summary** If using the FTAC Fury 20" Bolts (explosive bolts) the explosion will one shot a juggernaut. **Steps to Reproduce** Steps to reproduce the behavior: 1. Equip a crossbow with the FTAC Fury 20" Bolts 2. Have an enemy player get a Juggernaut 3. Shoot the enemy player with the crossbow 4. Once the bolt explodes the enemy player will die despite being a juggernaut who was only shot once. **Expected behavior** For the crossbow to do some damage but not kill a juggernaut in one shot. **Actual results** The crossbow only requires one arrow to kill a juggernaut making the juggernaut a useless killstreak. **Screenshots** [Video of issue](https://youtu.be/zJ9nAJBV0cc) **System info** - Works on all platforms - Game version 1.13.0 **Additional context** The FTAC Fury 20" Bolts are the only ones that have this behavior, I have tried with all the other bolts. As far as I tested, this is the only weapon in the game that can one-shot a juggernaut. The only other things that can do so are killstreaks.
1.0
Crossbow with FTAC Fury 20" Bolts one shots juggernaut - **Summary** If using the FTAC Fury 20" Bolts (explosive bolts) the explosion will one shot a juggernaut. **Steps to Reproduce** Steps to reproduce the behavior: 1. Equip a crossbow with the FTAC Fury 20" Bolts 2. Have an enemy player get a Juggernaut 3. Shoot the enemy player with the crossbow 4. Once the bolt explodes the enemy player will die despite being a juggernaut who was only shot once. **Expected behavior** For the crossbow to do some damage but not kill a juggernaut in one shot. **Actual results** The crossbow only requires one arrow to kill a juggernaut making the juggernaut a useless killstreak. **Screenshots** [Video of issue](https://youtu.be/zJ9nAJBV0cc) **System info** - Works on all platforms - Game version 1.13.0 **Additional context** The FTAC Fury 20" Bolts are the only ones that have this behavior, I have tried with all the other bolts. As far as I tested, this is the only weapon in the game that can one-shot a juggernaut. The only other things that can do so are killstreaks.
non_process
crossbow with ftac fury bolts one shots juggernaut summary if using the ftac fury bolts explosive bolts the explosion will one shot a juggernaut steps to reproduce steps to reproduce the behavior equip a crossbow with the ftac fury bolts have an enemy player get a juggernaut shoot the enemy player with the crossbow once the bolt explodes the enemy player will die despite being a juggernaut who was only shot once expected behavior for the crossbow to do some damage but not kill a juggernaut in one shot actual results the crossbow only requires one arrow to kill a juggernaut making the juggernaut a useless killstreak screenshots system info works on all platforms game version additional context the ftac fury bolts are the only ones that have this behavior i have tried with all the other bolts as far as i tested this is the only weapon in the game that can one shot a juggernaut the only other things that can do so are killstreaks
0
161,687
13,865,798,448
IssuesEvent
2020-10-16 05:22:38
natahouse/react-snap
https://api.github.com/repos/natahouse/react-snap
closed
Add documentation on deciding whether to use react-snap or other approaches
documentation
Initially requested at https://github.com/stereobooster/react-snap/issues/118
1.0
Add documentation on deciding whether to use react-snap or other approaches - Initially requested at https://github.com/stereobooster/react-snap/issues/118
non_process
add documentation on deciding whether to use react snap or other approaches initially requested at
0
6,940
10,110,543,912
IssuesEvent
2019-07-30 10:30:58
bisq-network/bisq
https://api.github.com/repos/bisq-network/bisq
closed
Error at traderSignAndFinalizeDisputedPayout , Then can no longer take offer
in:trade-process was:dropped
I guess what happened was, I took an order, sent coins to the given address, when clicked reviewed order it says offer not available, saying the worst case I would lose my bond or something something, then I moved the coins into bisq wallet. And then somehow this trade went into support/dispute. Then it disappeared, but I got this pop up while using the app. And another side effect is I can no longer take any offers, saying peer offline, cannot connect to market maker something something. It happens to every offer for 2 days now so I'm convinced that because of this pop up error that causes the latter issue.
1.0
Error at traderSignAndFinalizeDisputedPayout , Then can no longer take offer - I guess what happened was, I took an order, sent coins to the given address, when clicked reviewed order it says offer not available, saying the worst case I would lose my bond or something something, then I moved the coins into bisq wallet. And then somehow this trade went into support/dispute. Then it disappeared, but I got this pop up while using the app. And another side effect is I can no longer take any offers, saying peer offline, cannot connect to market maker something something. It happens to every offer for 2 days now so I'm convinced that because of this pop up error that causes the latter issue.
process
error at tradersignandfinalizedisputedpayout then can no longer take offer i guess what happened was i took an order sent coins to the given address when clicked reviewed order it says offer not available saying the worst case i would lose my bond or something something then i moved the coins into bisq wallet and then somehow this trade went into support dispute then it disappeared but i got this pop up while using the app and another side effect is i can no longer take any offers saying peer offline cannot connect to market maker something something it happens to every offer for days now so i m convinced that because of this pop up error that causes the latter issue
1
266,070
8,362,583,508
IssuesEvent
2018-10-03 17:14:41
phetsims/graphing-quadratics
https://api.github.com/repos/phetsims/graphing-quadratics
opened
convert QuadraticIO to ES6 class
priority:5-deferred
PhET-iO `phetioInherit` currently does not support ES6 classes for implementation of IO types, see https://github.com/phetsims/phet-io/issues/1371. So QuadraticIO is currently implemented using ES5 style. When that issue has been addressed, convert QuadraticIO to an ES6 implementation. This issue does not need to be addressed before 1.0 publication, and is therefore labeled as deferred.
1.0
convert QuadraticIO to ES6 class - PhET-iO `phetioInherit` currently does not support ES6 classes for implementation of IO types, see https://github.com/phetsims/phet-io/issues/1371. So QuadraticIO is currently implemented using ES5 style. When that issue has been addressed, convert QuadraticIO to an ES6 implementation. This issue does not need to be addressed before 1.0 publication, and is therefore labeled as deferred.
non_process
convert quadraticio to class phet io phetioinherit currently does not support classes for implementation of io types see so quadraticio is currently implemented using style when that issue has been addressed convert quadraticio to an implementation this issue does not need to be addressed before publication and is therefore labeled as deferred
0
8,875
11,968,333,540
IssuesEvent
2020-04-06 08:28:19
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
cytolysis
multi-species process
Hi This term GO:0001897 cytolysis by symbiont of host cells can have a taxon ID added But with this term adding the taxon ID leads to an error GO:0001900 positive regulation of cytolysis by symbiont of host cells Is this intentional or is there something missing? Thanks Ruth
1.0
cytolysis - Hi This term GO:0001897 cytolysis by symbiont of host cells can have a taxon ID added But with this term adding the taxon ID leads to an error GO:0001900 positive regulation of cytolysis by symbiont of host cells Is this intentional or is there something missing? Thanks Ruth
process
cytolysis hi this term go cytolysis by symbiont of host cells can have a taxon id added but with this term adding the taxon id leads to an error go positive regulation of cytolysis by symbiont of host cells is this intentional or is there something missing thanks ruth
1
64
2,522,483,847
IssuesEvent
2015-01-19 22:30:46
MozillaFoundation/plan
https://api.github.com/repos/MozillaFoundation/plan
closed
Planning
p1 process
2015 planning process with MOFO & MOCO for board review on December 19th. Phase: Research / Build / Ship Owner: @openmatt Decision: @msurman Lead design: n/a Lead dev: n/a Quality: n/a
1.0
Planning - 2015 planning process with MOFO & MOCO for board review on December 19th. Phase: Research / Build / Ship Owner: @openmatt Decision: @msurman Lead design: n/a Lead dev: n/a Quality: n/a
process
planning planning process with mofo moco for board review on december phase research build ship owner openmatt decision msurman lead design n a lead dev n a quality n a
1
4,180
7,114,519,393
IssuesEvent
2018-01-18 01:11:09
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
closed
Added debugging entries in Standard_Query_Processor.cpp and add debug module PROXY_DEBUG_MYSQL_QUERY_PROCESSOR
DEBUG QUERY PROCESSOR development
Query Processor is documented. Adding few debugging entries will be helpful
1.0
Added debugging entries in Standard_Query_Processor.cpp and add debug module PROXY_DEBUG_MYSQL_QUERY_PROCESSOR - Query Processor is documented. Adding few debugging entries will be helpful
process
added debugging entries in standard query processor cpp and add debug module proxy debug mysql query processor query processor is documented adding few debugging entries will be helpful
1
207,885
16,096,750,201
IssuesEvent
2021-04-27 01:43:18
sct-pipeline/ukbiobank-spinalcord-csa
https://api.github.com/repos/sct-pipeline/ukbiobank-spinalcord-csa
opened
Remove ANTS in dependencies
documentation
## Description Since ANTS was only used for registration of the labeled segmentation of T1w to T2w and T2w was removed from the processing pipeline in https://github.com/sct-pipeline/ukbiobank-spinalcord-csa/pull/52, we can remove it from dependencies in `README.md`
1.0
Remove ANTS in dependencies - ## Description Since ANTS was only used for registration of the labeled segmentation of T1w to T2w and T2w was removed from the processing pipeline in https://github.com/sct-pipeline/ukbiobank-spinalcord-csa/pull/52, we can remove it from dependencies in `README.md`
non_process
remove ants in dependencies description since ants was only used for registration of the labeled segmentation of to and was removed from the processing pipeline in we can remove it from dependencies in readme md
0
21,481
14,591,098,652
IssuesEvent
2020-12-19 11:16:42
macvim-dev/macvim
https://api.github.com/repos/macvim-dev/macvim
closed
CI: Migrate to GitHub Actions
Infrastructure
Due to Travis CI's [recent changes](https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing) which makes it harder for open-source projects to run for free, we should investigate migrating to Github Actions (which a lot of other open-source apps have done as well). We currently have a limited set of credits which will run out soon, so we should do the work soon. Also, GitHub Actions supports more features than Travis CI, such as the ability to generate artifacts, much better control over caching, flexible language for constructing different jobs and tasks, tighter integration with other GitHub features like releases, and so on. The loss here would be the test matrix as Travis CI has a [much wider range](https://docs.travis-ci.com/user/reference/osx/) of macOS VMs to choose from (down to Xcode 7 / macOS 10.11 as of now), whereas GitHub Actions is [much more limited](https://docs.github.com/en/free-pro-team@latest/actions/reference/specifications-for-github-hosted-runners) for testing on old OS versions (down to Xcode 10 / macOS 10.15 only). Track that work in this issue. - [x] Add GitHub Actions support (#1144) - [x] Fix Vim tests to work (#1146) - [x] Add test matrix (#1147)
1.0
CI: Migrate to GitHub Actions - Due to Travis CI's [recent changes](https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing) which makes it harder for open-source projects to run for free, we should investigate migrating to Github Actions (which a lot of other open-source apps have done as well). We currently have a limited set of credits which will run out soon, so we should do the work soon. Also, GitHub Actions supports more features than Travis CI, such as the ability to generate artifacts, much better control over caching, flexible language for constructing different jobs and tasks, tighter integration with other GitHub features like releases, and so on. The loss here would be the test matrix as Travis CI has a [much wider range](https://docs.travis-ci.com/user/reference/osx/) of macOS VMs to choose from (down to Xcode 7 / macOS 10.11 as of now), whereas GitHub Actions is [much more limited](https://docs.github.com/en/free-pro-team@latest/actions/reference/specifications-for-github-hosted-runners) for testing on old OS versions (down to Xcode 10 / macOS 10.15 only). Track that work in this issue. - [x] Add GitHub Actions support (#1144) - [x] Fix Vim tests to work (#1146) - [x] Add test matrix (#1147)
non_process
ci migrate to github actions due to travis ci s which makes it harder for open source projects to run for free we should investigate migrating to github actions which a lot of other open source apps have done as well we currently have a limited set of credits which will run out soon so we should do the work soon also github actions supports more features than travis ci such as the ability to generate artifacts much better control over caching flexible language for constructing different jobs and tasks tighter integration with other github features like releases and so on the loss here would be the test matrix as travis ci has a of macos vms to choose from down to xcode macos as of now whereas github actions is for testing on old os versions down to xcode macos only track that work in this issue add github actions support fix vim tests to work add test matrix
0
20,752
27,486,765,764
IssuesEvent
2023-03-04 06:03:32
GoogleCloudPlatform/microservices-demo
https://api.github.com/repos/GoogleCloudPlatform/microservices-demo
closed
Move cloud-ops-sandbox kustomization configuration to CloudOps Sandbox repo
type: cleanup type: process priority: p2
### Describe request or inquiry PR #1566 added the kustomization config that deploys Online Boutique using the configuration that Sandbox uses. It makes sense to host this kustomization config in the Sandbox [repo] instead of the one of Online Boutique. The possible setup that "fixes" Online Boutique version can look like: ```yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - github.com/GoogleCloudPlatform/microservices-demo/kustomize/base?ref=release/v0.5.1 components: - github.com/GoogleCloudPlatform/microservices-demo/kustomize/components/google-cloud-operations?ref=release/v0.5.1 ``` ### What purpose/environment will this feature serve? This change will let to avoid misuse of the Online Boutique collection of kustomizations as well as will remove the reversed dependency between Online Boutique and Sandbox. [repo]: https://github.com/GoogleCloudPlatform/cloud-ops-sandbox
1.0
Move cloud-ops-sandbox kustomization configuration to CloudOps Sandbox repo - ### Describe request or inquiry PR #1566 added the kustomization config that deploys Online Boutique using the configuration that Sandbox uses. It makes sense to host this kustomization config in the Sandbox [repo] instead of the one of Online Boutique. The possible setup that "fixes" Online Boutique version can look like: ```yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - github.com/GoogleCloudPlatform/microservices-demo/kustomize/base?ref=release/v0.5.1 components: - github.com/GoogleCloudPlatform/microservices-demo/kustomize/components/google-cloud-operations?ref=release/v0.5.1 ``` ### What purpose/environment will this feature serve? This change will let to avoid misuse of the Online Boutique collection of kustomizations as well as will remove the reversed dependency between Online Boutique and Sandbox. [repo]: https://github.com/GoogleCloudPlatform/cloud-ops-sandbox
process
move cloud ops sandbox kustomization configuration to cloudops sandbox repo describe request or inquiry pr added the kustomization config that deploys online boutique using the configuration that sandbox uses it makes sense to host this kustomization config in the sandbox instead of the one of online boutique the possible setup that fixes online boutique version can look like yaml apiversion kustomize config io kind kustomization resources github com googlecloudplatform microservices demo kustomize base ref release components github com googlecloudplatform microservices demo kustomize components google cloud operations ref release what purpose environment will this feature serve this change will let to avoid misuse of the online boutique collection of kustomizations as well as will remove the reversed dependency between online boutique and sandbox
1
2,602
5,356,457,870
IssuesEvent
2017-02-20 15:42:32
openvstorage/gobjfs
https://api.github.com/repos/openvstorage/gobjfs
closed
option to limit the number of file descriptors that can be used.
process_wontfix type_feature
Rora load can cause an [Alba ASD to go down](https://github.com/openvstorage/alba/issues/372). If we can limit the number of fds rora uses and limit the number of fds the tcp server has, we can reserve some for internal use (logging, UDP multicast, rocksdb) and avoid these bad scenarios.
1.0
option to limit the number of file descriptors that can be used. - Rora load can cause an [Alba ASD to go down](https://github.com/openvstorage/alba/issues/372). If we can limit the number of fds rora uses and limit the number of fds the tcp server has, we can reserve some for internal use (logging, UDP multicast, rocksdb) and avoid these bad scenarios.
process
option to limit the number of file descriptors that can be used rora load can cause an if we can limit the number of fds rora uses and limit the number of fds the tcp server has we can reserve some for internal use logging udp multicast rocksdb and avoid these bad scenarios
1
1,259
3,791,887,139
IssuesEvent
2016-03-22 06:28:57
nodejs/node
https://api.github.com/repos/nodejs/node
closed
`process.stdin` keypress event returns `ctrl: false` in a `[control + [backspace]` event
process question
When logging a keypress event, the `[control]` key generally provides a `{ ... ctrl: true }` in the return keypress data returned on the event. However, peculiarly, when `[control + [backspace]` event is fired, `ctrl` is returned as false. Steps to reproduce: ```js var rl = require('readline'); var i = rl.createInterface(process.stdin, process.stdout, null); i.question("What do you think of node.js?", function(answer) { console.log("Thank you for your valuable feedback."); i.close(); process.stdin.destroy(); }); process.stdin.on('keypress', function(key, data){ console.log(data); }); /* { sequence: ', name: 'backspace', ctrl: false, < -- should be true meta: false, shift: false } */ ``` This was verified to be a situation in `v4.0.0`.
1.0
`process.stdin` keypress event returns `ctrl: false` in a `[control + [backspace]` event - When logging a keypress event, the `[control]` key generally provides a `{ ... ctrl: true }` in the return keypress data returned on the event. However, peculiarly, when `[control + [backspace]` event is fired, `ctrl` is returned as false. Steps to reproduce: ```js var rl = require('readline'); var i = rl.createInterface(process.stdin, process.stdout, null); i.question("What do you think of node.js?", function(answer) { console.log("Thank you for your valuable feedback."); i.close(); process.stdin.destroy(); }); process.stdin.on('keypress', function(key, data){ console.log(data); }); /* { sequence: ', name: 'backspace', ctrl: false, < -- should be true meta: false, shift: false } */ ``` This was verified to be a situation in `v4.0.0`.
process
process stdin keypress event returns ctrl false in a event when logging a keypress event the key generally provides a ctrl true in the return keypress data returned on the event however peculiarly when event is fired ctrl is returned as false steps to reproduce js var rl require readline var i rl createinterface process stdin process stdout null i question what do you think of node js function answer console log thank you for your valuable feedback i close process stdin destroy process stdin on keypress function key data console log data sequence name backspace ctrl false should be true meta false shift false this was verified to be a situation in
1
4,554
11,348,404,413
IssuesEvent
2020-01-24 00:16:26
TerriaJS/terriajs
https://api.github.com/repos/TerriaJS/terriajs
closed
Changing coord presentation fails in mobx
New Model Architecture T-Bug
Clicking on the coord bar in a mobx app doesn't do anything. Throws an error related to `toggleUseProjection` method. ![Coords](https://user-images.githubusercontent.com/6735870/71046129-6b255180-218b-11ea-84ae-2bfdb9e207c4.png)
1.0
Changing coord presentation fails in mobx - Clicking on the coord bar in a mobx app doesn't do anything. Throws an error related to `toggleUseProjection` method. ![Coords](https://user-images.githubusercontent.com/6735870/71046129-6b255180-218b-11ea-84ae-2bfdb9e207c4.png)
non_process
changing coord presentation fails in mobx clicking on the coord bar in a mobx app doesn t do anything throws an error related to toggleuseprojection method
0
7,615
10,724,187,260
IssuesEvent
2019-10-28 00:04:33
osquery/osquery
https://api.github.com/repos/osquery/osquery
opened
Configurable audit backlog wait time setting on Linux
feature process auditing
<!-- Thank you for contributing to osquery! --> # Feature request <!-- Please follow this template. Before submitting an issue search for duplicates. --> ### What new feature do you want? <!-- Please describe with as much detail as possible. Include examples. --> The ability to configure the audit_backlog_wait_time value. References: [1] http://man7.org/linux/man-pages/man3/audit_set_backlog_wait_time.3.html [2] https://github.com/torvalds/linux/blob/master/kernel/audit.c#L121-L122 Currently the value is hardcoded to 1 in the code: https://github.com/osquery/osquery/blob/master/osquery/events/linux/auditdnetlink.cpp#L307 ` audit_set_backlog_wait_time(audit_netlink_handle_, 1); ` This has been discussed by @theopolis in https://github.com/osquery/osquery/issues/3148 ### How is this new feature useful? <!-- Describe how can this make osquery better or how you intend to use it. --> Based on a couple of online discussions it seems like for specific kernel versions setting this value to a number will lead to the kernel blocking the thread that invoked the syscall if the audit consumer (osquery) can't keep up and the backlog is reached. Setting the parameter to 0 ensures nothing gets blocked in the kernel. Bear in mind that this is not supported in all kernels so its effectiveness can vary. References: [1] https://github.com/elastic/go-libaudit/issues/34 [2] https://github.com/elastic/beats/issues/7157 **Caveat**: this could be specific to the Beats product by Elastic, and osquery could work OK with a value of 1. But, auditd itself sets it to 0 on a my Ubuntu 18.04 LTS that I've just tested it. ### How can this be implemented? <!-- It's okay to leave this empty if you don't know. --> Change the backlog wait time from 1 to 0, or even better keep a default and then expose this value as parameter that can be set via the CLI and a configuration option so that users can pick their own. Should be implemented similar to #5921.
1.0
Configurable audit backlog wait time setting on Linux - <!-- Thank you for contributing to osquery! --> # Feature request <!-- Please follow this template. Before submitting an issue search for duplicates. --> ### What new feature do you want? <!-- Please describe with as much detail as possible. Include examples. --> The ability to configure the audit_backlog_wait_time value. References: [1] http://man7.org/linux/man-pages/man3/audit_set_backlog_wait_time.3.html [2] https://github.com/torvalds/linux/blob/master/kernel/audit.c#L121-L122 Currently the value is hardcoded to 1 in the code: https://github.com/osquery/osquery/blob/master/osquery/events/linux/auditdnetlink.cpp#L307 ` audit_set_backlog_wait_time(audit_netlink_handle_, 1); ` This has been discussed by @theopolis in https://github.com/osquery/osquery/issues/3148 ### How is this new feature useful? <!-- Describe how can this make osquery better or how you intend to use it. --> Based on a couple of online discussions it seems like for specific kernel versions setting this value to a number will lead to the kernel blocking the thread that invoked the syscall if the audit consumer (osquery) can't keep up and the backlog is reached. Setting the parameter to 0 ensures nothing gets blocked in the kernel. Bear in mind that this is not supported in all kernels so its effectiveness can vary. References: [1] https://github.com/elastic/go-libaudit/issues/34 [2] https://github.com/elastic/beats/issues/7157 **Caveat**: this could be specific to the Beats product by Elastic, and osquery could work OK with a value of 1. But, auditd itself sets it to 0 on a my Ubuntu 18.04 LTS that I've just tested it. ### How can this be implemented? <!-- It's okay to leave this empty if you don't know. --> Change the backlog wait time from 1 to 0, or even better keep a default and then expose this value as parameter that can be set via the CLI and a configuration option so that users can pick their own. 
Should be implemented similar to #5921.
process
configurable audit backlog wait time setting on linux feature request please follow this template before submitting an issue search for duplicates what new feature do you want the ability to configure the audit backlog wait time value references currently the value is hardcoded to in the code audit set backlog wait time audit netlink handle this has been discussed by theopolis in how is this new feature useful based on a couple of online discussions it seems like for specific kernel versions setting this value to a number will lead to the kernel blocking the thread that invoked the syscall if the audit consumer osquery can t keep up and the backlog is reached setting the parameter to ensures nothing gets blocked in the kernel bear in mind that this is not supported in all kernels so its effectiveness can vary references caveat this could be specific to the beats product by elastic and osquery could work ok with a value of but auditd itself sets it to on a my ubuntu lts that i ve just tested it how can this be implemented change the backlog wait time from to or even better keep a default and then expose this value as parameter that can be set via the cli and a configuration option so that users can pick their own should be implemented similar to
1
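The record above proposes replacing osquery's hardcoded backlog wait time of 1 with a default of 0 plus a user-configurable CLI/config option. A minimal sketch of that flag-resolution logic, assuming a hypothetical flag name and validation rules (this is not osquery's actual API):

```python
# Sketch: resolve a user-supplied audit_backlog_wait_time flag before it
# would be handed to audit_set_backlog_wait_time(). Flag semantics here
# are illustrative assumptions, not osquery's real option handling.

DEFAULT_BACKLOG_WAIT_TIME = 0  # 0 = kernel never blocks syscall callers


def resolve_backlog_wait_time(flag_value=None):
    """Return the wait time to pass to the audit subsystem.

    None means "flag not set", so the safe default (0) applies.
    Negative values are rejected because the kernel expects an
    unsigned value.
    """
    if flag_value is None:
        return DEFAULT_BACKLOG_WAIT_TIME
    if flag_value < 0:
        raise ValueError("audit_backlog_wait_time must be >= 0")
    return flag_value
```

This keeps the issue's suggested behavior: nothing blocks by default, but operators on kernels where a nonzero value works can opt in.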
41,266
10,349,632,887
IssuesEvent
2019-09-04 23:16:25
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
SET constants cause jOOQ parser to fail
T: Defect
The problem deals with jOOQ dynamically generating a Java enum class to correspond with a Sql SET. It appears the generated enum constants do not take into account syntax differences between Sql SET constants and Java enum constants e.g., SET constants can have spaces, dashes, and other chars not legal for enum constants. More generally I'm surprised jOOQ is creating an enum class like this. I would think a more abstract form of type information would be sufficient at this stage in the parser. ### Steps to reproduce the problem (if possible, create an MCVE: https://github.com/jOOQ/jOOQ-mcve): - Download and extract the standard [sakila-schema](https://downloads.mysql.com/docs/sakila-db.zip) from MySql. - Parse the `sakila-schema.sql` file: ```java ... DSLContext ctx = DSL.using(SQLDialect.MYSQL); Meta meta = ctx.meta(Source.of(sakila)); ``` - Results in exception due to non-identifier chars in SET constant names: Error:java: org.jooq.tools.reflect.ReflectException: Compilation error: /org/jooq/impl/GeneratedEnum12521848.java:3: error: ',', '}', or ';' expected Error:java: /org/jooq/impl/GeneratedEnum12521848.java:3: error: '}' expected Error:java: /org/jooq/impl/GeneratedEnum12521848.java:5: error: class, interface, or enum expected Error:java: /org/jooq/impl/GeneratedEnum12521848.java:7: error: class, interface, or enum expected Error:java: /org/jooq/impl/GeneratedEnum12521848.java:9: error: class, interface, or enum expected Error:java: /org/jooq/impl/GeneratedEnum12521848.java:10: error: class, interface, or enum expected Error:java: 6 errors ### Versions: - jOOQ: 3.12.0 - Java: 8 - JDBC Driver (include name if unofficial driver): H2
1.0
SET constants cause jOOQ parser to fail - The problem deals with jOOQ dynamically generating a Java enum class to correspond with a Sql SET. It appears the generated enum constants do not take into account syntax differences between Sql SET constants and Java enum constants e.g., SET constants can have spaces, dashes, and other chars not legal for enum constants. More generally I'm surprised jOOQ is creating an enum class like this. I would think a more abstract form of type information would be sufficient at this stage in the parser. ### Steps to reproduce the problem (if possible, create an MCVE: https://github.com/jOOQ/jOOQ-mcve): - Download and extract the standard [sakila-schema](https://downloads.mysql.com/docs/sakila-db.zip) from MySql. - Parse the `sakila-schema.sql` file: ```java ... DSLContext ctx = DSL.using(SQLDialect.MYSQL); Meta meta = ctx.meta(Source.of(sakila)); ``` - Results in exception due to non-identifier chars in SET constant names: Error:java: org.jooq.tools.reflect.ReflectException: Compilation error: /org/jooq/impl/GeneratedEnum12521848.java:3: error: ',', '}', or ';' expected Error:java: /org/jooq/impl/GeneratedEnum12521848.java:3: error: '}' expected Error:java: /org/jooq/impl/GeneratedEnum12521848.java:5: error: class, interface, or enum expected Error:java: /org/jooq/impl/GeneratedEnum12521848.java:7: error: class, interface, or enum expected Error:java: /org/jooq/impl/GeneratedEnum12521848.java:9: error: class, interface, or enum expected Error:java: /org/jooq/impl/GeneratedEnum12521848.java:10: error: class, interface, or enum expected Error:java: 6 errors ### Versions: - jOOQ: 3.12.0 - Java: 8 - JDBC Driver (include name if unofficial driver): H2
non_process
set constants cause jooq parser to fail the problem deals with jooq dynamically generating a java enum class to correspond with a sql set it appears the generated enum constants do not take into account syntax differences between sql set constants and java enum constants e g set constants can have spaces dashes and other chars not legal for enum constants more generally i m surprised jooq is creating an enum class like this i would think a more abstract form of type information would be sufficient at this stage in the parser steps to reproduce the problem if possible create an mcve download and extract the standard from mysql parse the sakila schema sql file java dslcontext ctx dsl using sqldialect mysql meta meta ctx meta source of sakila results in exception due to non indentifier chars in set constant names error java org jooq tools reflect reflectexception compilation error org jooq impl java error or expected error java org jooq impl java error expected error java org jooq impl java error class interface or enum expected error java org jooq impl java error class interface or enum expected error java org jooq impl java error class interface or enum expected error java org jooq impl java error class interface or enum expected error java errors versions jooq java jdbc driver include name if inofficial driver
0
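The compilation failure in the record above stems from emitting SQL SET constants (which may contain spaces, dashes, etc.) verbatim as Java enum constant names. A hedged sketch of the kind of identifier sanitization that avoids this class of error (illustrative only, not jOOQ's actual fix):

```python
import re


def to_enum_constant(set_literal):
    """Map an arbitrary SQL SET literal to a legal Java enum constant name.

    Characters that are illegal in Java identifiers become underscores,
    and a leading digit gets an underscore prefix. Purely illustrative;
    jOOQ's real generator also has to handle collisions after mapping.
    """
    name = re.sub(r"[^A-Za-z0-9_$]", "_", set_literal)
    if re.match(r"[0-9]", name):
        name = "_" + name
    return name
```

With this mapping, a SET value like `no-no value` would yield the valid constant `no_no_value` instead of producing uncompilable generated source.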
434,032
12,512,957,074
IssuesEvent
2020-06-03 00:23:41
eclipse-ee4j/glassfish
https://api.github.com/repos/eclipse-ee4j/glassfish
closed
SJSU-Installing one more instance while uninstalling the previous instance
Component: installation Priority: Minor Stale Type: Bug glassfish installer
SJSU Glassfish allows installation while uninstalling the existing one. Ideally it should abort one of the processes #### Environment Operating System: All Platform: All #### Affected Versions [V3]
1.0
SJSU-Installing one more instance while uninstalling the previous instance - SJSU Glassfish allows installation while uninstalling the existing one. Ideally it should abort one of the processes #### Environment Operating System: All Platform: All #### Affected Versions [V3]
non_process
sjsu installing one more instance while unistalling the previous instance sjsu glassfish allows installation while uninstalling the exisiting one ideally it should abort one of the process environment operating system all platform all affected versions
0
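The record above asks the installer to abort when an install and an uninstall run concurrently. A common way to enforce that is an exclusive lock file created atomically with `O_CREAT | O_EXCL`; a minimal sketch, where the lock path and error message are assumptions rather than GlassFish's actual mechanism:

```python
import os
import tempfile

# Assumed lock location; a real installer would pick a product-specific path.
LOCK_PATH = os.path.join(tempfile.gettempdir(), "glassfish-installer.lock")


class InstallerBusyError(RuntimeError):
    """Raised when another install/uninstall already holds the lock."""


def acquire_installer_lock(lock_path=LOCK_PATH):
    """Atomically create the lock file; fail fast if it already exists."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        raise InstallerBusyError("another install/uninstall is in progress")
    os.close(fd)
    return lock_path


def release_installer_lock(lock_path=LOCK_PATH):
    """Remove the lock file once the operation finishes."""
    os.remove(lock_path)
```

Both the install and uninstall paths would call `acquire_installer_lock()` first, so the second operation aborts immediately instead of racing the first.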
66,125
14,767,355,403
IssuesEvent
2021-01-10 06:14:07
shiriivtsan/bebo
https://api.github.com/repos/shiriivtsan/bebo
opened
CVE-2019-17531 (High) detected in jackson-databind-2.8.6.jar
security vulnerability
## CVE-2019-17531 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: bebo/jackson-databind-2.8.6.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.6.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/shiriivtsan/bebo/commit/8eb42e349cd3aded1eab4b65b59788a7e934dd99">8eb42e349cd3aded1eab4b65b59788a7e934dd99</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload. 
<p>Publish Date: 2019-10-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531>CVE-2019-17531</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p> <p>Release Date: 2019-10-12</p> <p>Fix Resolution: 2.10</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.6","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"}],"vulnerabilityIdentifier":"CVE-2019-17531","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. 
When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-17531 (High) detected in jackson-databind-2.8.6.jar - ## CVE-2019-17531 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: bebo/jackson-databind-2.8.6.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.6.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/shiriivtsan/bebo/commit/8eb42e349cd3aded1eab4b65b59788a7e934dd99">8eb42e349cd3aded1eab4b65b59788a7e934dd99</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload. 
<p>Publish Date: 2019-10-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531>CVE-2019-17531</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p> <p>Release Date: 2019-10-12</p> <p>Fix Resolution: 2.10</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.6","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"}],"vulnerabilityIdentifier":"CVE-2019-17531","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. 
When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library bebo jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the apache extra version x jar in the classpath and an attacker can provide a jndi service to access it is possible to make the service execute a malicious payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the apache extra version x jar in the classpath and an attacker can provide a jndi service to access it is possible to make the service execute a malicious payload vulnerabilityurl
0
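The advisory in the record above gives 2.10 as the minimum fixed jackson-databind version. Note that 2.8.6 sorts below 2.10 only under numeric (not lexicographic) comparison of dotted components; a sketch of that check:

```python
def parse_version(v):
    """Split a dotted version string into a tuple of ints for numeric comparison."""
    return tuple(int(part) for part in v.split("."))


def is_vulnerable(installed, minimum_fix):
    """True when the installed version sorts below the minimum fixed version."""
    return parse_version(installed) < parse_version(minimum_fix)
```

For example, `is_vulnerable("2.8.6", "2.10")` is true because `(2, 8, 6) < (2, 10)`, even though the string `"2.8.6"` compares greater than `"2.10"` lexicographically.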
16,809
22,058,877,002
IssuesEvent
2022-05-30 15:18:46
camunda/zeebe-process-test
https://api.github.com/repos/camunda/zeebe-process-test
closed
ProcessInstanceAssert::hasVariableWithValue shows inconsistent behaviour with same test case and values
kind/bug team/process-automation
## Description ProcessInstanceAssert::hasVariableWithValue passes and fails during repeated runs with unchanged test data and code. ### Expected behaviour Consistent behavior. If object contains expected values then the test should pass (independent of the sequence) ## Reproduction steps Please run test code repeatedly and observe varying outcome. <details> <summary>Test code</summary> https://github.com/rob2universe/camunda8-testing/blob/a01e62ee443d13ddd38f3cde5fec1a0839109d51/src/test/java/io/camunda/c8/test/ProcessTests.java#L72 ``` package io.camunda.c8.test; import com.google.gson.Gson; import com.google.gson.JsonObject; import io.camunda.zeebe.client.ZeebeClient; import io.camunda.zeebe.client.api.response.DeploymentEvent; import io.camunda.zeebe.process.test.api.ZeebeTestEngine; import io.camunda.zeebe.process.test.extension.ZeebeProcessTest; import io.camunda.zeebe.process.test.filters.RecordStream; import org.junit.jupiter.api.Test; import java.time.Duration; import java.util.Map; import java.util.concurrent.TimeoutException; import static io.camunda.zeebe.process.test.assertions.BpmnAssert.assertThat; @ZeebeProcessTest public class ProcessTests { private ZeebeTestEngine engine; private ZeebeClient client; private RecordStream recordStream; private DeploymentEvent initDeployment() { return client.newDeployResourceCommand() .addResourceFromClasspath("process.bpmn") .addResourceFromClasspath("decision.dmn") .send() .join(); } @Test public void testDeployment() { assertThat(initDeployment()); } @Test public void testProcess() throws InterruptedException, TimeoutException { initDeployment(); // When instance is started var piEvent = client.newCreateInstanceCommand() .bpmnProcessId("TestProcess") .latestVersion() .variables(Map.of("myItem", "a")) .send() .join(); // Then instance should have passed start event and should be awaiting job completion assertThat(piEvent) .hasPassedElement("ProcessingStartedStartEvent") .isWaitingAtElements("CallServiceTask"); // 
When job is activated var response = client.newActivateJobsCommand() .jobType("callService") .maxJobsToActivate(1) .send() .join(); // Then activated job should exist var activatedJob = response.getJobs().get(0); assertThat(activatedJob); // When job is completed and process engine had time to continue processing client.newCompleteCommand(activatedJob.getKey()).send().join(); engine.waitForIdleState(Duration.ofMillis(500)); // Then service task, business rule task, and process instance should be completed assertThat(piEvent) .hasPassedElementsInOrder("CallServiceTask", "EvaluateBusinessRulesTask") .isCompleted() // and business rule task result should be available as process data .hasVariableWithValue("result", Map.of("checkedItem","a","myOutput","aa")); //TODO // the test passed and sometimes fails, probably depending on the sequence of the serialization of the Map. // When the test failes the error is: // java.lang.AssertionError: The variable 'result' does not have the expected value. The value passed in // ('{checkedItem=a, myOutput=aa}') is internally mapped to a JSON String that yields // '{"checkedItem":"a","myOutput":"aa"}'. However, the actual value (as JSON String) is // '{"myOutput":"aa","checkedItem":"a"}'. 
} } ``` </details> <details> <summary>Process</summary> ```xml <?xml version="1.0" encoding="UTF-8"?> <definitions xmlns="https://www.omg.org/spec/DMN/20191111/MODEL/" xmlns:dmndi="https://www.omg.org/spec/DMN/20191111/DMNDI/" xmlns:dc="http://www.omg.org/spec/DMN/20180521/DC/" xmlns:modeler="http://camunda.org/schema/modeler/1.0" xmlns:camunda="http://camunda.org/schema/1.0/dmn" id="Definitions_99d90b2e-3675-44a8-842b-7e38de90a7b4" name="CheckItem" namespace="http://camunda.org/schema/1.0/dmn" exporter="Camunda Modeler" exporterVersion="5.0.0" modeler:executionPlatform="Camunda Cloud" modeler:executionPlatformVersion="1.3.0" camunda:diagramRelationId="2ffb3ea9-7596-432a-a9ce-1bac47a013a4"> <decision id="Decision_CheckItem" name="Check Item"> <decisionTable id="DecisionTable_0azx1hv"> <input id="Input_1" label="myItem"> <inputExpression id="InputExpression_1" typeRef="string"> <text>myItem</text> </inputExpression> </input> <output id="OutputClause_1gt1jlu" label="myOutput" name="myOutput" typeRef="string" /> <output id="OutputClause_15hqo4g" label="checkedItem" name="checkedItem" typeRef="string" /> <rule id="DecisionRule_0qmyhsl"> <inputEntry id="UnaryTests_1hk0huc"> <text>"a"</text> </inputEntry> <outputEntry id="LiteralExpression_0ofzcke"> <text>"aa"</text> </outputEntry> <outputEntry id="LiteralExpression_18inzt7"> <text>myItem</text> </outputEntry> </rule> <rule id="DecisionRule_05d3x9i"> <inputEntry id="UnaryTests_1rbftn4"> <text>"b"</text> </inputEntry> <outputEntry id="LiteralExpression_0jzgss5"> <text>"bb"</text> </outputEntry> <outputEntry id="LiteralExpression_1gqywzr"> <text>myItem</text> </outputEntry> </rule> <rule id="DecisionRule_0ozdf9j"> <inputEntry id="UnaryTests_087nmtg"> <text>"c"</text> </inputEntry> <outputEntry id="LiteralExpression_0yeghgd"> <text>"cc"</text> </outputEntry> <outputEntry id="LiteralExpression_1bhx39z"> <text>myItem</text> </outputEntry> </rule> </decisionTable> </decision> <dmndi:DMNDI> <dmndi:DMNDiagram> <dmndi:DMNShape 
dmnElementRef="Decision_CheckItem"> <dc:Bounds height="80" width="180" x="160" y="100" /> </dmndi:DMNShape> </dmndi:DMNDiagram> </dmndi:DMNDI> </definitions> ``` ```xml <?xml version="1.0" encoding="UTF-8"?> <bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:dc="http://www.omg.org/spec/DD/20100524/DC" xmlns:zeebe="http://camunda.org/schema/zeebe/1.0" xmlns:di="http://www.omg.org/spec/DD/20100524/DI" xmlns:modeler="http://camunda.org/schema/modeler/1.0" id="Definitions_1fvxzlk" targetNamespace="http://bpmn.io/schema/bpmn" exporter="Camunda Modeler" exporterVersion="5.0.0" modeler:executionPlatform="Camunda Cloud" modeler:executionPlatformVersion="1.1.0"> <bpmn:process id="TestProcess" name="Test Process" isExecutable="true"> <bpmn:startEvent id="ProcessingStartedStartEvent" name="Processing started"> <bpmn:outgoing>Flow_006yvg0</bpmn:outgoing> </bpmn:startEvent> <bpmn:sequenceFlow id="Flow_006yvg0" sourceRef="ProcessingStartedStartEvent" targetRef="CallServiceTask" /> <bpmn:endEvent id="ProcessingCompletedEndEvent" name="Processing completed"> <bpmn:incoming>Flow_18z712h</bpmn:incoming> </bpmn:endEvent> <bpmn:sequenceFlow id="Flow_08flgx2" sourceRef="CallServiceTask" targetRef="EvaluateBusinessRulesTask" /> <bpmn:serviceTask id="CallServiceTask" name="Call Service"> <bpmn:extensionElements> <zeebe:taskDefinition type="callService" /> </bpmn:extensionElements> <bpmn:incoming>Flow_006yvg0</bpmn:incoming> <bpmn:outgoing>Flow_08flgx2</bpmn:outgoing> </bpmn:serviceTask> <bpmn:sequenceFlow id="Flow_18z712h" sourceRef="EvaluateBusinessRulesTask" targetRef="ProcessingCompletedEndEvent" /> <bpmn:businessRuleTask id="EvaluateBusinessRulesTask" name="Evaluate business rules"> <bpmn:extensionElements> <zeebe:calledDecision decisionId="Decision_CheckItem" resultVariable="result" /> </bpmn:extensionElements> <bpmn:incoming>Flow_08flgx2</bpmn:incoming> <bpmn:outgoing>Flow_18z712h</bpmn:outgoing> 
</bpmn:businessRuleTask> </bpmn:process> <bpmndi:BPMNDiagram id="BPMNDiagram_1"> <bpmndi:BPMNPlane id="BPMNPlane_1" bpmnElement="TestProcess"> <bpmndi:BPMNEdge id="Flow_08flgx2_di" bpmnElement="Flow_08flgx2"> <di:waypoint x="340" y="120" /> <di:waypoint x="400" y="120" /> </bpmndi:BPMNEdge> <bpmndi:BPMNEdge id="Flow_006yvg0_di" bpmnElement="Flow_006yvg0"> <di:waypoint x="188" y="120" /> <di:waypoint x="240" y="120" /> </bpmndi:BPMNEdge> <bpmndi:BPMNEdge id="Flow_18z712h_di" bpmnElement="Flow_18z712h"> <di:waypoint x="500" y="120" /> <di:waypoint x="562" y="120" /> </bpmndi:BPMNEdge> <bpmndi:BPMNShape id="Event_13xfe7s_di" bpmnElement="ProcessingStartedStartEvent"> <dc:Bounds x="152" y="102" width="36" height="36" /> <bpmndi:BPMNLabel> <dc:Bounds x="143" y="145" width="55" height="27" /> </bpmndi:BPMNLabel> </bpmndi:BPMNShape> <bpmndi:BPMNShape id="Activity_0lsf20p_di" bpmnElement="CallServiceTask"> <dc:Bounds x="240" y="80" width="100" height="80" /> <bpmndi:BPMNLabel /> </bpmndi:BPMNShape> <bpmndi:BPMNShape id="Event_0p2r0n9_di" bpmnElement="ProcessingCompletedEndEvent"> <dc:Bounds x="562" y="102" width="36" height="36" /> <bpmndi:BPMNLabel> <dc:Bounds x="553" y="145" width="55" height="27" /> </bpmndi:BPMNLabel> </bpmndi:BPMNShape> <bpmndi:BPMNShape id="Activity_030jwac_di" bpmnElement="EvaluateBusinessRulesTask"> <dc:Bounds x="400" y="80" width="100" height="80" /> <bpmndi:BPMNLabel /> </bpmndi:BPMNShape> </bpmndi:BPMNPlane> </bpmndi:BPMNDiagram> </bpmn:definitions> ``` </details> ## Environment - OS: <!-- e.g. Linux --> - Version: <!-- e.g. 1.0.0 -->
1.0
ProcessInstanceAssert::hasVariableWithValue shows inconsistent behaviour with same test case and values - ## Description ProcessInstanceAssert::hasVariableWithValue passes and fails during repeated runs with unchanged test data and code. ### Expected behaviour Consistent behavior. If object contains expected values then the test should pass (independent of the sequence) ## Reproduction steps Please run test code repeatedly and observe varying outcome. <details> <summary>Test code</summary> https://github.com/rob2universe/camunda8-testing/blob/a01e62ee443d13ddd38f3cde5fec1a0839109d51/src/test/java/io/camunda/c8/test/ProcessTests.java#L72 ``` package io.camunda.c8.test; import com.google.gson.Gson; import com.google.gson.JsonObject; import io.camunda.zeebe.client.ZeebeClient; import io.camunda.zeebe.client.api.response.DeploymentEvent; import io.camunda.zeebe.process.test.api.ZeebeTestEngine; import io.camunda.zeebe.process.test.extension.ZeebeProcessTest; import io.camunda.zeebe.process.test.filters.RecordStream; import org.junit.jupiter.api.Test; import java.time.Duration; import java.util.Map; import java.util.concurrent.TimeoutException; import static io.camunda.zeebe.process.test.assertions.BpmnAssert.assertThat; @ZeebeProcessTest public class ProcessTests { private ZeebeTestEngine engine; private ZeebeClient client; private RecordStream recordStream; private DeploymentEvent initDeployment() { return client.newDeployResourceCommand() .addResourceFromClasspath("process.bpmn") .addResourceFromClasspath("decision.dmn") .send() .join(); } @Test public void testDeployment() { assertThat(initDeployment()); } @Test public void testProcess() throws InterruptedException, TimeoutException { initDeployment(); // When instance is started var piEvent = client.newCreateInstanceCommand() .bpmnProcessId("TestProcess") .latestVersion() .variables(Map.of("myItem", "a")) .send() .join(); // Then instance should have passed start event and should be awaiting job completion 
assertThat(piEvent) .hasPassedElement("ProcessingStartedStartEvent") .isWaitingAtElements("CallServiceTask"); // When job is activated var response = client.newActivateJobsCommand() .jobType("callService") .maxJobsToActivate(1) .send() .join(); // Then activated job should exist var activatedJob = response.getJobs().get(0); assertThat(activatedJob); // When job is completed and process engine had time to continue processing client.newCompleteCommand(activatedJob.getKey()).send().join(); engine.waitForIdleState(Duration.ofMillis(500)); // Then service task, business rule task, and process instance should be completed assertThat(piEvent) .hasPassedElementsInOrder("CallServiceTask", "EvaluateBusinessRulesTask") .isCompleted() // and business rule task result should be available as process data .hasVariableWithValue("result", Map.of("checkedItem","a","myOutput","aa")); //TODO // the test passed and sometimes fails, probably depending on the sequence of the serialization of the Map. // When the test failes the error is: // java.lang.AssertionError: The variable 'result' does not have the expected value. The value passed in // ('{checkedItem=a, myOutput=aa}') is internally mapped to a JSON String that yields // '{"checkedItem":"a","myOutput":"aa"}'. However, the actual value (as JSON String) is // '{"myOutput":"aa","checkedItem":"a"}'. 
} } ``` </details> <details> <summary>Process</summary> ```xml <?xml version="1.0" encoding="UTF-8"?> <definitions xmlns="https://www.omg.org/spec/DMN/20191111/MODEL/" xmlns:dmndi="https://www.omg.org/spec/DMN/20191111/DMNDI/" xmlns:dc="http://www.omg.org/spec/DMN/20180521/DC/" xmlns:modeler="http://camunda.org/schema/modeler/1.0" xmlns:camunda="http://camunda.org/schema/1.0/dmn" id="Definitions_99d90b2e-3675-44a8-842b-7e38de90a7b4" name="CheckItem" namespace="http://camunda.org/schema/1.0/dmn" exporter="Camunda Modeler" exporterVersion="5.0.0" modeler:executionPlatform="Camunda Cloud" modeler:executionPlatformVersion="1.3.0" camunda:diagramRelationId="2ffb3ea9-7596-432a-a9ce-1bac47a013a4"> <decision id="Decision_CheckItem" name="Check Item"> <decisionTable id="DecisionTable_0azx1hv"> <input id="Input_1" label="myItem"> <inputExpression id="InputExpression_1" typeRef="string"> <text>myItem</text> </inputExpression> </input> <output id="OutputClause_1gt1jlu" label="myOutput" name="myOutput" typeRef="string" /> <output id="OutputClause_15hqo4g" label="checkedItem" name="checkedItem" typeRef="string" /> <rule id="DecisionRule_0qmyhsl"> <inputEntry id="UnaryTests_1hk0huc"> <text>"a"</text> </inputEntry> <outputEntry id="LiteralExpression_0ofzcke"> <text>"aa"</text> </outputEntry> <outputEntry id="LiteralExpression_18inzt7"> <text>myItem</text> </outputEntry> </rule> <rule id="DecisionRule_05d3x9i"> <inputEntry id="UnaryTests_1rbftn4"> <text>"b"</text> </inputEntry> <outputEntry id="LiteralExpression_0jzgss5"> <text>"bb"</text> </outputEntry> <outputEntry id="LiteralExpression_1gqywzr"> <text>myItem</text> </outputEntry> </rule> <rule id="DecisionRule_0ozdf9j"> <inputEntry id="UnaryTests_087nmtg"> <text>"c"</text> </inputEntry> <outputEntry id="LiteralExpression_0yeghgd"> <text>"cc"</text> </outputEntry> <outputEntry id="LiteralExpression_1bhx39z"> <text>myItem</text> </outputEntry> </rule> </decisionTable> </decision> <dmndi:DMNDI> <dmndi:DMNDiagram> <dmndi:DMNShape 
dmnElementRef="Decision_CheckItem"> <dc:Bounds height="80" width="180" x="160" y="100" /> </dmndi:DMNShape> </dmndi:DMNDiagram> </dmndi:DMNDI> </definitions> ``` ```xml <?xml version="1.0" encoding="UTF-8"?> <bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:dc="http://www.omg.org/spec/DD/20100524/DC" xmlns:zeebe="http://camunda.org/schema/zeebe/1.0" xmlns:di="http://www.omg.org/spec/DD/20100524/DI" xmlns:modeler="http://camunda.org/schema/modeler/1.0" id="Definitions_1fvxzlk" targetNamespace="http://bpmn.io/schema/bpmn" exporter="Camunda Modeler" exporterVersion="5.0.0" modeler:executionPlatform="Camunda Cloud" modeler:executionPlatformVersion="1.1.0"> <bpmn:process id="TestProcess" name="Test Process" isExecutable="true"> <bpmn:startEvent id="ProcessingStartedStartEvent" name="Processing started"> <bpmn:outgoing>Flow_006yvg0</bpmn:outgoing> </bpmn:startEvent> <bpmn:sequenceFlow id="Flow_006yvg0" sourceRef="ProcessingStartedStartEvent" targetRef="CallServiceTask" /> <bpmn:endEvent id="ProcessingCompletedEndEvent" name="Processing completed"> <bpmn:incoming>Flow_18z712h</bpmn:incoming> </bpmn:endEvent> <bpmn:sequenceFlow id="Flow_08flgx2" sourceRef="CallServiceTask" targetRef="EvaluateBusinessRulesTask" /> <bpmn:serviceTask id="CallServiceTask" name="Call Service"> <bpmn:extensionElements> <zeebe:taskDefinition type="callService" /> </bpmn:extensionElements> <bpmn:incoming>Flow_006yvg0</bpmn:incoming> <bpmn:outgoing>Flow_08flgx2</bpmn:outgoing> </bpmn:serviceTask> <bpmn:sequenceFlow id="Flow_18z712h" sourceRef="EvaluateBusinessRulesTask" targetRef="ProcessingCompletedEndEvent" /> <bpmn:businessRuleTask id="EvaluateBusinessRulesTask" name="Evaluate business rules"> <bpmn:extensionElements> <zeebe:calledDecision decisionId="Decision_CheckItem" resultVariable="result" /> </bpmn:extensionElements> <bpmn:incoming>Flow_08flgx2</bpmn:incoming> <bpmn:outgoing>Flow_18z712h</bpmn:outgoing> 
</bpmn:businessRuleTask> </bpmn:process> <bpmndi:BPMNDiagram id="BPMNDiagram_1"> <bpmndi:BPMNPlane id="BPMNPlane_1" bpmnElement="TestProcess"> <bpmndi:BPMNEdge id="Flow_08flgx2_di" bpmnElement="Flow_08flgx2"> <di:waypoint x="340" y="120" /> <di:waypoint x="400" y="120" /> </bpmndi:BPMNEdge> <bpmndi:BPMNEdge id="Flow_006yvg0_di" bpmnElement="Flow_006yvg0"> <di:waypoint x="188" y="120" /> <di:waypoint x="240" y="120" /> </bpmndi:BPMNEdge> <bpmndi:BPMNEdge id="Flow_18z712h_di" bpmnElement="Flow_18z712h"> <di:waypoint x="500" y="120" /> <di:waypoint x="562" y="120" /> </bpmndi:BPMNEdge> <bpmndi:BPMNShape id="Event_13xfe7s_di" bpmnElement="ProcessingStartedStartEvent"> <dc:Bounds x="152" y="102" width="36" height="36" /> <bpmndi:BPMNLabel> <dc:Bounds x="143" y="145" width="55" height="27" /> </bpmndi:BPMNLabel> </bpmndi:BPMNShape> <bpmndi:BPMNShape id="Activity_0lsf20p_di" bpmnElement="CallServiceTask"> <dc:Bounds x="240" y="80" width="100" height="80" /> <bpmndi:BPMNLabel /> </bpmndi:BPMNShape> <bpmndi:BPMNShape id="Event_0p2r0n9_di" bpmnElement="ProcessingCompletedEndEvent"> <dc:Bounds x="562" y="102" width="36" height="36" /> <bpmndi:BPMNLabel> <dc:Bounds x="553" y="145" width="55" height="27" /> </bpmndi:BPMNLabel> </bpmndi:BPMNShape> <bpmndi:BPMNShape id="Activity_030jwac_di" bpmnElement="EvaluateBusinessRulesTask"> <dc:Bounds x="400" y="80" width="100" height="80" /> <bpmndi:BPMNLabel /> </bpmndi:BPMNShape> </bpmndi:BPMNPlane> </bpmndi:BPMNDiagram> </bpmn:definitions> ``` </details> ## Environment - OS: <!-- e.g. Linux --> - Version: <!-- e.g. 1.0.0 -->
process
processinstanceassert hasvariablewithvalue shows inconsistent behaviour with same test case and values description processinstanceassert hasvariablewithvalue passes and fails during repeated runs with unchanged test data and code expected behaviour consistent behavior if object contains expected values then the test should pass independent of the sequence reproduction steps please run test code repeatedly and observe varying outcome test code package io camunda test import com google gson gson import com google gson jsonobject import io camunda zeebe client zeebeclient import io camunda zeebe client api response deploymentevent import io camunda zeebe process test api zeebetestengine import io camunda zeebe process test extension zeebeprocesstest import io camunda zeebe process test filters recordstream import org junit jupiter api test import java time duration import java util map import java util concurrent timeoutexception import static io camunda zeebe process test assertions bpmnassert assertthat zeebeprocesstest public class processtests private zeebetestengine engine private zeebeclient client private recordstream recordstream private deploymentevent initdeployment return client newdeployresourcecommand addresourcefromclasspath process bpmn addresourcefromclasspath decision dmn send join test public void testdeployment assertthat initdeployment test public void testprocess throws interruptedexception timeoutexception initdeployment when instance is started var pievent client newcreateinstancecommand bpmnprocessid testprocess latestversion variables map of myitem a send join then instance should have passed start event and should be awaiting job completion assertthat pievent haspassedelement processingstartedstartevent iswaitingatelements callservicetask when job is activated var response client newactivatejobscommand jobtype callservice maxjobstoactivate send join then activated job should exist var activatedjob response getjobs get assertthat activatedjob 
when job is completed and process engine had time to continue processing client newcompletecommand activatedjob getkey send join engine waitforidlestate duration ofmillis then service task business rule task and process instance should be completed assertthat pievent haspassedelementsinorder callservicetask evaluatebusinessrulestask iscompleted and business rule task result should be available as process data hasvariablewithvalue result map of checkeditem a myoutput aa todo the test passed and sometimes fails probably depending on the sequence of the serialization of the map when the test failes the error is java lang assertionerror the variable result does not have the expected value the value passed in checkeditem a myoutput aa is internally mapped to a json string that yields checkeditem a myoutput aa however the actual value as json string is myoutput aa checkeditem a process xml myitem a aa myitem b bb myitem c cc myitem xml flow flow flow flow flow flow environment os version
1
16,068
20,235,034,745
IssuesEvent
2022-02-14 00:20:30
fmnas/fmnas-site
https://api.github.com/repos/fmnas/fmnas-site
reopened
Try getting size on a remote server.
admin backend form processor small (1-3h)
--- _This issue has been automatically created by [todo-actions](https://github.com/apps/todo-actions) based on a TODO comment found in [src/resize.php:17](https://github.com/fmnas/fmnas-site/blob/main/src/resize.php#L17). It will automatically be closed when the TODO comment is removed from the default branch (main)._
1.0
Try getting size on a remote server. - --- _This issue has been automatically created by [todo-actions](https://github.com/apps/todo-actions) based on a TODO comment found in [src/resize.php:17](https://github.com/fmnas/fmnas-site/blob/main/src/resize.php#L17). It will automatically be closed when the TODO comment is removed from the default branch (main)._
process
try getting size on a remote server this issue has been automatically created by based on a todo comment found in it will automatically be closed when the todo comment is removed from the default branch main
1
386
2,831,349,240
IssuesEvent
2015-05-24 15:12:14
brucemiller/LaTeXML
https://api.github.com/repos/brucemiller/LaTeXML
closed
[regression] No more way to generate pmml + LaTeX annotation
postprocessing regression
See the thread on the mailing list http://lists.jacobs-university.de/pipermail/project-latexml/2015-May/002071.html Basically, in the past presentation MathML + LaTeX annotation was the default. Now, by default only presentation MathML is generated and there is not any option to attach the LaTeX annotation alone. So I'd suggest to add an option to generate --keeptex so that the user can explicitly request the TeX annotation. (Of course someone will complain that it is yet another new command line option... but it is not worse than introducing a regression + behavior change, no?)
1.0
[regression] No more way to generate pmml + LaTeX annotation - See the thread on the mailing list http://lists.jacobs-university.de/pipermail/project-latexml/2015-May/002071.html Basically, in the past presentation MathML + LaTeX annotation was the default. Now, by default only presentation MathML is generated and there is not any option to attach the LaTeX annotation alone. So I'd suggest to add an option to generate --keeptex so that the user can explicitly request the TeX annotation. (Of course someone will complain that it is yet another new command line option... but it is not worse than introducing a regression + behavior change, no?)
process
no more way to generate pmml latex annotation see the thread on the mailing list basically in the past presentation mathml latex annotation was the default now by default only presentation mathml is generated and there is not any option to attach the latex annotation alone so i d suggest to add an option to generate keeptex so that the user can explicitly request the tex annotation of course someone will complain that it is yet another new command line option but it is not worse than a introducing a regression behavior change no
1
817,732
30,652,157,813
IssuesEvent
2023-07-25 09:37:03
fivetran/dbt_asana
https://api.github.com/repos/fivetran/dbt_asana
closed
[Feature] Add project_id and/or project_ids to `asana__task`
type:enhancement priority:p3 status:accepted update_type:models
### Is there an existing feature request for this? - [X] I have searched the existing issues ### Describe the Feature As a user of this model, I would've expected project_id to be an available field in the `asana__task` Model. I know this can be a little bit more complicated because a task could be associated to multiple projects, but still leaving it out entirely means you have to then join against a staging table (stg_asana__project_task) which is not ideal. I'd be willing to contribute a PR for this if you're interested. ### Describe alternatives you've considered _No response_ ### Are you interested in contributing this feature? - [X] Yes. - [ ] Yes, but I will need assistance and will schedule time during your [office hours](https://calendly.com/fivetran-solutions-team/fivetran-solutions-team-office-hours) for guidance. - [ ] No. ### Anything else? _No response_
1.0
[Feature] Add project_id and/or project_ids to `asana__task` - ### Is there an existing feature request for this? - [X] I have searched the existing issues ### Describe the Feature As a user of this model, I would've expected project_id to be an available field in the `asana__task` Model. I know this can be a little bit more complicated because a task could be associated to multiple projects, but still leaving it out entirely means you have to then join against a staging table (stg_asana__project_task) which is not ideal. I'd be willing to contribute a PR for this if you're interested. ### Describe alternatives you've considered _No response_ ### Are you interested in contributing this feature? - [X] Yes. - [ ] Yes, but I will need assistance and will schedule time during your [office hours](https://calendly.com/fivetran-solutions-team/fivetran-solutions-team-office-hours) for guidance. - [ ] No. ### Anything else? _No response_
non_process
add project id and or project ids to asana task is there an existing feature request for this i have searched the existing issues describe the feature as a user of this model i would ve expected project id to be an available field in the asana task model i know this can be a little bit more complicated because a task could be associated to multiple projects but still leaving it out entirely means you have to then join against a staging table stg asana project task which is not ideal i d be willing to contribute a pr for this if you re interested describe alternatives you ve considered no response are you interested in contributing this feature yes yes but i will need assistance and will schedule time during your for guidance no anything else no response
0
181,986
14,086,352,710
IssuesEvent
2020-11-05 03:28:02
CSU-Booking-Platform/application
https://api.github.com/repos/CSU-Booking-Platform/application
opened
Acceptance Tests for #39 Book a Room
acceptance-test
### User story #39 ### Acceptance criteria checklist - [ ] Users can book rooms for specific time intervals - [ ] Users cannot book a room if it is not available at that time
1.0
Acceptance Tests for #39 Book a Room - ### User story #39 ### Acceptance criteria checklist - [ ] Users can book rooms for specific time intervals - [ ] Users cannot book a room if it is not available at that time
non_process
acceptance tests for book a room user story acceptance criteria checklist users can book rooms for specific time intervals users cannot book a room if it is not available at that time
0
218,557
16,996,499,924
IssuesEvent
2021-07-01 07:13:34
k8-proxy/metadefender-menlo-integration
https://api.github.com/repos/k8-proxy/metadefender-menlo-integration
opened
Automating end to end flow of uploading a file till downloading it
P1 Test
- Pick a file from data-set - Download the file (will trigger menlo middleware if file of type that is downloadable) - Wait for middleware results (Download button to enable) - Download the file - Provide a means to allow the user to specify file locations from a config file. Repeat for all files provided
1.0
Automating end to end flow of uploading a file till downloading it - - Pick a file from data-set - Download the file (will trigger menlo middleware if file of type that is downloadable) - Wait for middleware results (Download button to enable) - Download the file - Provide a means to allow the user to specify file locations from a config file. Repeat for all files provided
non_process
automating end to end floe of uploading a file till downloading it pick a file from data set download the file will trigger menlo middleware if file of type that is downloadable wait for middleware results download button to enable download the file provide a means to allow user to specify file locations from a config file repeat for all files provided
0
1,637
4,258,335,995
IssuesEvent
2016-07-11 06:01:10
NICTA/nationalmap
https://api.github.com/repos/NICTA/nationalmap
closed
Move Project Documentation to Repo from Wiki
Process and packaging
So external parties can potentially contribute universal gaps/tips... for example For **Testing Purposes Only** you can deploy and **share** your own *Version/Demonstration/Test/Beta/Pilot/Etc* of The NationalMap with others quickly by remapping Port 80 to Port 3001 with this command: ``` iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3001 ``` > Web search redirecting/remapping iptable ports for instructions/troubleshooting. NOTE: this is a temporary change on the operating system - if used with instances that will/may be rebooted - you will need to ensure the proper startup scripts to automatically enable these changes on reload. HOWEVER this is meant to be a temporary means to provide world-wide access to prototypes **NOT** as a permanent solution as production services should be using a dedicated/trusted standard web server/proxy - either using a fully-featured, dedicated, and robust solution like Nginx or a lightweight web-proxy or cache like Varnish is used to current support production - to redirect external traffic to the application. These services are important for security reasons too as they can ensure the application is run as a general user NOT as the root user (Only root users can run services from lower level Port #s) this allows you to ensure that even if the application is exploited ~0.01% risk +1 then the system would not be susceptible (As if that mattered in the age of containerization and isolation :) Sorry the last Paragraph was a personal rant that would be removed
1.0
Move Project Documentation to Repo from Wiki - So external parties can potentially contribute universal gaps/tips... for example For **Testing Purposes Only** you can deploy and **share** your own *Version/Demonstration/Test/Beta/Pilot/Etc* of The NationalMap with others quickly by remapping Port 80 to Port 3001 with this command: ``` iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3001 ``` > Web search redirecting/remapping iptable ports for instructions/troubleshooting. NOTE: this is a temporary change on the operating system - if used with instances that will/may be rebooted - you will need to ensure the proper startup scripts to automatically enable these changes on reload. HOWEVER this is meant to be a temporary means to provide world-wide access to prototypes **NOT** as a permanent solution as production services should be using a dedicated/trusted standard web server/proxy - either using a fully-featured, dedicated, and robust solution like Nginx or a lightweight web-proxy or cache like Varnish is used to current support production - to redirect external traffic to the application. These services are important for security reasons too as they can ensure the application is run as a general user NOT as the root user (Only root users can run services from lower level Port #s) this allows you to ensure that even if the application is exploited ~0.01% risk +1 then the system would not be susceptible (As if that mattered in the age of containerization and isolation :) Sorry the last Paragraph was a personal rant that would be removed
process
move project documentation to repo from wiki so external parties can potentially contribute universal gaps tips for example for testing purposes only you can deploy and share your own version demonstration test beta pilot etc of the nationalmap with others quickly by remapping port to port with this command iptables t nat a prerouting i p tcp dport j redirect to port web search redirecting remapping iptable ports for instructions troubleshooting note this is a temporary change on the operating system if used with instances that will may be rebooted you will need to ensure the proper startup scripts to automatically able these changes on reload however this is meant to be a temporary means to provide world wide access to prototypes not as a permanent solution as production services should be using a dedicated trusted standard web server proxy either using a fully feature dedicated and robust solution like nginx or a lightweight web proxy or cache like varnish is used to current support production to redirect external traffic to the application these services are important for security reasons too as they can ensure the application is run as a general user not as the root user only root users can run services from lower level port s this allows you to ensure that even if the application is exploited risk then the system would not be susceptible as if that mattered in the age of containerization and isolation sorry the last paragraph was a personal rant that would be removed
1
5,603
8,467,531,596
IssuesEvent
2018-10-23 17:13:36
census-instrumentation/opencensus-service
https://api.github.com/repos/census-instrumentation/opencensus-service
closed
metrics/interceptor: define how the agent will be able to pull metrics from tertiary sources
process
Following suit with https://github.com/census-instrumentation/opencensus-service/issues/19, we need to define how the agent will be able to pull metrics from sources like: * Prometheus and transform them into OpenCensus Metrics which can then be consumed/exported to the various backends. Basically this issue is to define the ability for the agent to be pointed to users that have all their metrics going to "System A" but are slowly doing a migration to "System B" or OpenCensus and should be able to have this done transparently without having to invasively instrument their code or take down their services. Some services like Prometheus will require us to pull from client applications while others like Stackdriver support pushing to an end point so it can be intercepted. Anyways this issue is to have that discussion and set priorities and reminders for how we shall proceed.
1.0
metrics/interceptor: define how the agent will be able to pull metrics from tertiary sources - Following suit with https://github.com/census-instrumentation/opencensus-service/issues/19, we need to define how the agent will be able to pull metrics from sources like: * Prometheus and transform them into OpenCensus Metrics which can then be consumed/exported to the various backends. Basically this issue is to define the ability for the agent to be pointed to users that have all their metrics going to "System A" but are slowly doing a migration to "System B" or OpenCensus and should be able to have this done transparently without having to invasively instrument their code or take down their services. Some services like Prometheus will require us to pull from client applications while others like Stackdriver support pushing to an end point so it can be intercepted. Anyways this issue is to have that discussion and set priorities and reminders for how we shall proceed.
process
metrics interceptor define how the agent will be able to pull metrics from tertiary sources following suit with we need to define how the agent will be able to pull metrics from sources like prometheus and transform them into opencensus metrics which can then be consumed exported to the various backends basically this issue is to define the ability for the agent to be pointed to users that have all their metrics going to system a but are slowly doing a migration to system b or opencensus and should be able to have this done transparently without having to invasively instrument their code or take down their services some services like prometheus will require us to pull from client applications while others like stackdriver support pushing to an end point so can be intercepted anyways this issue to have that discussion and set priorities and reminders for how we shall proceed
1
19,415
25,558,619,301
IssuesEvent
2022-11-30 09:04:15
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
opened
Update `frequency` in time-related preprocessors
enhancement preprocessor
https://github.com/ESMValGroup/ESMValCore/pull/1837 allows the automatic update of `frequency` in `metadata.yml` with a value given in `cube.attributes` (using `cube.attributes['frequency']`). Currently, this feature is not used by any preprocessor. It would be nice to add this to at least some of the time-related preprocessors. Remaining questions (see [this](https://github.com/ESMValGroup/ESMValCore/pull/1837#issuecomment-1330790035) comment): - How do we deal with data that does not have a time dimension anymore (e.g., after running `climate_statistics` with `period=full`). Use `fx`? Use `None`? Use an empty string `''`? - How do we deal with climatologies (e.g., after running `climate_statistics` with `period=monthly`)? For monthly climatologies, do we simply use `mon`? I am not entirely sure as this is different to a standard monthly time series. - If coarse-resolution data (e.g., yearly) is run through a finer-resolution preprocessor (e.g., `monthly_statistics`), the data stays yearly. How do we deal with this? Add a sanity check to the preprocs? Ignore this as this is not a realistic use case?
1.0
Update `frequency` in time-related preprocessors - https://github.com/ESMValGroup/ESMValCore/pull/1837 allows the automatic update of `frequency` in `metadata.yml` with a value given in `cube.attributes` (using `cube.attributes['frequency']`). Currently, this feature is not used by any preprocessor. It would be nice to add this to at least some of the time-related preprocessors. Remaining questions (see [this](https://github.com/ESMValGroup/ESMValCore/pull/1837#issuecomment-1330790035) comment): - How do we deal with data that does not have a time dimension anymore (e.g., after running `climate_statistics` with `period=full`). Use `fx`? Use `None`? Use an empty string `''`? - How do we deal with climatologies (e.g., after running `climate_statistics` with `period=monthly`)? For monthly climatologies, do we simply use `mon`? I am not entirely sure as this is different to a standard monthly time series. - If coarse-resolution data (e.g., yearly) is run through a finer-resolution preprocessor (e.g., `monthly_statistics`), the data stays yearly. How do we deal with this? Add a sanity check to the preprocs? Ignore this as this is not a realistic use case?
process
update frequency in time related preprocessors allows the automatic update of frequency in metadata yml with a value given in cube attributes using cube attributes currently this feature is not used by any preprocessor it would be nice to add this to at least some of the time related preprocessors remaining questions see comment how do we deal with data that does not have a time dimensions anymore e g after running climate statistics with period full use fx use none use an empty string how do we deal with climatologies e g after running climate statistics with period monthly for monthly climatologies do we simply use mon i am not entirly sure as this is different to a standard monthly time series if course resolution data e g yearly is run through a finer resolution preprocessor e g monthly statistics the data stays yearly how do we deal with this add a sanity check to the preprocs ignore this as this is not a realistic use case
1
83,004
10,315,997,777
IssuesEvent
2019-08-30 08:57:56
dinsic-pim/tchap-web
https://api.github.com/repos/dinsic-pim/tchap-web
opened
Room list : External user marker update
design enhancement
- [ ] Change the size of the blue border when external users aren't allowed to 1px (or maybe 2) - [ ] Add this marker to "Invites" and "Favourites" section
1.0
Room list : External user marker update - - [ ] Change the size of the blue border when external users aren't allowed to 1px (or maybe 2) - [ ] Add this marker to "Invites" and "Favourites" section
non_process
room list external user marker update change the size of the blue border when extern user aren t allowed to or maybe add this marker to invites and favourites section
0
721,692
24,834,914,541
IssuesEvent
2022-10-26 08:04:45
AY2223S1-CS2113-F11-1/tp
https://api.github.com/repos/AY2223S1-CS2113-F11-1/tp
closed
As a Property Manager, I can search for the clients or properties based on their details
type.Story priority.Medium
... so that I can quickly view the information
1.0
As a Property Manager, I can search for the clients or properties based on their details - ... so that I can quickly view the information
non_process
as a property manager i can search for the clients or properties based on their details so that i can quickly view the information
0
3,585
6,621,660,640
IssuesEvent
2017-09-21 20:03:51
WikiWatershed/model-my-watershed
https://api.github.com/repos/WikiWatershed/model-my-watershed
closed
Geoprocessing API: Log requests
BigCZ Geoprocessing API
From the ADR: https://github.com/WikiWatershed/model-my-watershed/blob/develop/doc/arch/adr-004-geoprocessing-api.md > Since we'll be logging to the database, we'll need to provide an interface into that data. I suggest we keep that simple and create a management task which exports ranges of usage data as a CSV so they can be evaluated in Excel. If this feature ends up gaining traction, we'll want to invest in a more robust solution. A DRF plugin that may assist with this task can be evaluated here http://drf-tracking.readthedocs.io/en/latest/.
1.0
Geoprocessing API: Log requests - From the ADR: https://github.com/WikiWatershed/model-my-watershed/blob/develop/doc/arch/adr-004-geoprocessing-api.md > Since we'll be logging to the database, we'll need to provide an interface into that data. I suggest we keep that simple and create a management task which exports ranges of usage data as a CSV so they can be evaluated in Excel. If this feature ends up gaining traction, we'll want to invest in a more robust solution. A DRF plugin that may assist with this task can be evaluated here http://drf-tracking.readthedocs.io/en/latest/.
process
geoprocessing api log requests from the adr since we ll be logging to the database we ll need to provide an interface into that data i suggest we keep that simple and create a management task which exports ranges of usage data as a csv so they can be evaluated in excel if this feature ends up gaining traction we ll want to invest in in a more robust solution a drf plugin that may assist with task can be evaluated here
1
175,532
13,563,033,990
IssuesEvent
2020-09-18 07:54:27
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
roachtest: kv/contention/nodes=4 failed
C-test-failure O-roachtest O-robot branch-release-20.2 release-blocker
[(roachtest).kv/contention/nodes=4 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2280957&tab=buildLog) on [release-20.2@56c72f47ae31ad3373b69f87250d57fdfee176ce](https://github.com/cockroachdb/cockroach/commits/56c72f47ae31ad3373b69f87250d57fdfee176ce): ``` The test failed on branch=release-20.2, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/kv/contention/nodes=4/run_1 ts_util.go:130,kv.go:254,cluster.go:2634,errgroup.go:57: spent 47.368421% of time below target of 50.000000 txn/s, wanted no more than 10.000000% cluster.go:2656,kv.go:257,test_runner.go:754: monitor failure: monitor task failed: t.Fatal() was called (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2644 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2652 | main.registerKVContention.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/kv.go:257 | main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:754 Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitor).wait.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2700 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | main.init | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2614 | runtime.doInit | /usr/local/go/src/runtime/proc.go:5228 | runtime.main | /usr/local/go/src/runtime/proc.go:190 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1357 Wraps: (6) t.Fatal() was called Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError ``` 
<details><summary>More</summary><p> Artifacts: [/kv/contention/nodes=4](https://teamcity.cockroachdb.com/viewLog.html?buildId=2280957&tab=artifacts#/kv/contention/nodes=4) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Akv%2Fcontention%2Fnodes%3D4.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
2.0
roachtest: kv/contention/nodes=4 failed - [(roachtest).kv/contention/nodes=4 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2280957&tab=buildLog) on [release-20.2@56c72f47ae31ad3373b69f87250d57fdfee176ce](https://github.com/cockroachdb/cockroach/commits/56c72f47ae31ad3373b69f87250d57fdfee176ce): ``` The test failed on branch=release-20.2, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/kv/contention/nodes=4/run_1 ts_util.go:130,kv.go:254,cluster.go:2634,errgroup.go:57: spent 47.368421% of time below target of 50.000000 txn/s, wanted no more than 10.000000% cluster.go:2656,kv.go:257,test_runner.go:754: monitor failure: monitor task failed: t.Fatal() was called (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2644 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2652 | main.registerKVContention.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/kv.go:257 | main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:754 Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitor).wait.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2700 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | main.init | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2614 | runtime.doInit | /usr/local/go/src/runtime/proc.go:5228 | runtime.main | /usr/local/go/src/runtime/proc.go:190 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1357 Wraps: (6) t.Fatal() was called Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack 
(6) *errutil.leafError ``` <details><summary>More</summary><p> Artifacts: [/kv/contention/nodes=4](https://teamcity.cockroachdb.com/viewLog.html?buildId=2280957&tab=artifacts#/kv/contention/nodes=4) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Akv%2Fcontention%2Fnodes%3D4.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
non_process
roachtest kv contention nodes failed on the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts kv contention nodes run ts util go kv go cluster go errgroup go spent of time below target of txn s wanted no more than cluster go kv go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registerkvcontention home agent work go src github com cockroachdb cockroach pkg cmd roachtest kv go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror more artifacts powered by
0
212,668
23,934,098,373
IssuesEvent
2022-09-11 01:02:51
brightcove/cloud-custodian
https://api.github.com/repos/brightcove/cloud-custodian
closed
CVE-2012-6708 (Medium) detected in github.com/aws/aws-sdk-go-v1.15.23, github.com/aws/amazon-ssm-agent-2.3.235.0 - autoclosed
security vulnerability
## CVE-2012-6708 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>github.com/aws/aws-sdk-go-v1.15.23</b>, <b>github.com/aws/amazon-ssm-agent-2.3.235.0</b></p></summary> <p> <details><summary><b>github.com/aws/aws-sdk-go-v1.15.23</b></p></summary> <p>AWS SDK for the Go programming language.</p> <p> Dependency Hierarchy: - :x: **github.com/aws/aws-sdk-go-v1.15.23** (Vulnerable Library) </details> <details><summary><b>github.com/aws/amazon-ssm-agent-2.3.235.0</b></p></summary> <p>Agent to enable remote management of your Amazon EC2 instance configuration.</p> <p> Dependency Hierarchy: - :x: **github.com/aws/amazon-ssm-agent-2.3.235.0** (Vulnerable Library) </details> <p>Found in base branch: <b>brightcove</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common. 
<p>Publish Date: 2018-01-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v1.9.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"GO","packageName":"github.com/aws/aws-sdk-go","packageVersion":"v1.15.23","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"github.com/aws/aws-sdk-go:v1.15.23","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v1.9.0","isBinary":true},{"packageType":"GO","packageName":"github.com/aws/amazon-ssm-agent","packageVersion":"2.3.235.0","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"github.com/aws/amazon-ssm-agent:2.3.235.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v1.9.0","isBinary":true}],"baseBranches":["brightcove"],"vulnerabilityIdentifier":"CVE-2012-6708","vulnerabilityDetails":"jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. 
The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the \u0027\u003c\u0027 character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the \u0027\u003c\u0027 character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2012-6708 (Medium) detected in github.com/aws/aws-sdk-go-v1.15.23, github.com/aws/amazon-ssm-agent-2.3.235.0 - autoclosed - ## CVE-2012-6708 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>github.com/aws/aws-sdk-go-v1.15.23</b>, <b>github.com/aws/amazon-ssm-agent-2.3.235.0</b></p></summary> <p> <details><summary><b>github.com/aws/aws-sdk-go-v1.15.23</b></p></summary> <p>AWS SDK for the Go programming language.</p> <p> Dependency Hierarchy: - :x: **github.com/aws/aws-sdk-go-v1.15.23** (Vulnerable Library) </details> <details><summary><b>github.com/aws/amazon-ssm-agent-2.3.235.0</b></p></summary> <p>Agent to enable remote management of your Amazon EC2 instance configuration.</p> <p> Dependency Hierarchy: - :x: **github.com/aws/amazon-ssm-agent-2.3.235.0** (Vulnerable Library) </details> <p>Found in base branch: <b>brightcove</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common. 
<p>Publish Date: 2018-01-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v1.9.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"GO","packageName":"github.com/aws/aws-sdk-go","packageVersion":"v1.15.23","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"github.com/aws/aws-sdk-go:v1.15.23","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v1.9.0","isBinary":true},{"packageType":"GO","packageName":"github.com/aws/amazon-ssm-agent","packageVersion":"2.3.235.0","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"github.com/aws/amazon-ssm-agent:2.3.235.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v1.9.0","isBinary":true}],"baseBranches":["brightcove"],"vulnerabilityIdentifier":"CVE-2012-6708","vulnerabilityDetails":"jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. 
The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the \u0027\u003c\u0027 character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the \u0027\u003c\u0027 character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in github com aws aws sdk go github com aws amazon ssm agent autoclosed cve medium severity vulnerability vulnerable libraries github com aws aws sdk go github com aws amazon ssm agent github com aws aws sdk go aws sdk for the go programming language dependency hierarchy x github com aws aws sdk go vulnerable library github com aws amazon ssm agent agent to enable remote management of your amazon instance configuration dependency hierarchy x github com aws amazon ssm agent vulnerable library found in base branch brightcove vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree github com aws aws sdk go isminimumfixversionavailable true minimumfixversion jquery isbinary true packagetype go packagename github com aws amazon ssm agent packageversion packagefilepaths istransitivedependency false dependencytree github com aws amazon ssm agent isminimumfixversionavailable true minimumfixversion jquery 
isbinary true basebranches vulnerabilityidentifier cve vulnerabilitydetails jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common vulnerabilityurl
0
15,518
19,703,267,924
IssuesEvent
2022-01-12 18:52:23
googleapis/java-dms
https://api.github.com/repos/googleapis/java-dms
opened
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'dms' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'dms' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname dms invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
92,868
11,716,598,109
IssuesEvent
2020-03-09 15:53:15
ibm-openbmc/dev
https://api.github.com/repos/ibm-openbmc/dev
opened
GUI : Design : Virtual TPM
GUI Milestone Tgt UI Design
## SMEs **BMC**: TBD **PHYP**: Chris Engle **ASMI**: Jayashankar Padath ## Use Case TBD ## InVision Prototype TBD ## Design Issue (phosphor-webui) TBD ## References/Resources - [eBMC Feature Item: 4.13.1](https://ibm.box.com/s/j15ux3yfjycy4or4azbqyhqq11lbya0r) - feature discovery folder: [Virtual Trusted Platform Module](https://ibm.box.com/s/fbit9bhwgcqghsivw7qxn8hl33ok43xp) - user research notes*: - user research synthesis: * This folder is restricted in accordance with GDPR guidelines.
1.0
GUI : Design : Virtual TPM - ## SMEs **BMC**: TBD **PHYP**: Chris Engle **ASMI**: Jayashankar Padath ## Use Case TBD ## InVision Prototype TBD ## Design Issue (phosphor-webui) TBD ## References/Resources - [eBMC Feature Item: 4.13.1](https://ibm.box.com/s/j15ux3yfjycy4or4azbqyhqq11lbya0r) - feature discovery folder: [Virtual Trusted Platform Module](https://ibm.box.com/s/fbit9bhwgcqghsivw7qxn8hl33ok43xp) - user research notes*: - user research synthesis: * This folder is restricted in accordance with GDPR guidelines.
non_process
gui design virtual tpm smes bmc tbd phyp chris engle asmi jayashankar padath use case tbd invision prototype tbd design issue phosphor webui tbd references resources feature discovery folder user research notes user research synthesis this folder is restricted in accordance with gdpr guidelines
0
17,923
23,912,728,929
IssuesEvent
2022-09-09 09:43:16
GoogleCloudPlatform/dotnet-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples
closed
[iot]: Some tests are flaky
type: process priority: p1 api: cloudiot samples
There seems to be issues when many instances of the tests are running in parallel. Example here: https://source.cloud.google.com/results/invocations/579fb838-8af8-40ad-973c-4d5bbeaead68/log
1.0
[iot]: Some tests are flaky - There seems to be issues when many instances of the tests are running in parallel. Example here: https://source.cloud.google.com/results/invocations/579fb838-8af8-40ad-973c-4d5bbeaead68/log
process
some tests are flaky there seems to be issues when many instances of the tests are running in parallel example here
1
20,588
27,247,844,167
IssuesEvent
2023-02-22 04:37:21
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Convert `npm/**/examples/**` tests to `system-tests`
process: tests stage: backlog CT
There are 24 example projects in `npm/**/examples/**`: ``` ➜ cypress git:(develop) ✗ echo npm/**/examples/** | xargs -n1 echo npm/react/examples/a11y npm/react/examples/craco npm/react/examples/find-webpack npm/react/examples/nextjs npm/react/examples/nextjs-webpack-5 npm/react/examples/react-scripts npm/react/examples/react-scripts-folder npm/react/examples/react-scripts-typescript npm/react/examples/sass-and-ts npm/react/examples/snapshots npm/react/examples/tailwind npm/react/examples/using-babel npm/react/examples/using-babel-typescript npm/react/examples/visual-sudoku npm/react/examples/visual-testing-with-applitools npm/react/examples/visual-testing-with-happo npm/react/examples/visual-testing-with-percy npm/react/examples/webpack-file npm/react/examples/webpack-options npm/vue/examples/code-coverage npm/vue/examples/vue-cli npm/webpack-preprocessor/examples/react-app npm/webpack-preprocessor/examples/use-babelrc npm/webpack-preprocessor/examples/use-ts-loader ➜ cypress git:(develop) ✗ echo npm/**/examples/** | xargs -n1 echo | wc -l 24 ``` All of these can be converted to `system-tests`. This is especially important to save CI time and cost for the example projects that require `yarn install`, since currently we're running `yarn install` for each one of those projects, for each commit. Partially blocked by #18574 which adds the `yarn install` caching to `system-tests`.
1.0
Convert `npm/**/examples/**` tests to `system-tests` - There are 24 example projects in `npm/**/examples/**`: ``` ➜ cypress git:(develop) ✗ echo npm/**/examples/** | xargs -n1 echo npm/react/examples/a11y npm/react/examples/craco npm/react/examples/find-webpack npm/react/examples/nextjs npm/react/examples/nextjs-webpack-5 npm/react/examples/react-scripts npm/react/examples/react-scripts-folder npm/react/examples/react-scripts-typescript npm/react/examples/sass-and-ts npm/react/examples/snapshots npm/react/examples/tailwind npm/react/examples/using-babel npm/react/examples/using-babel-typescript npm/react/examples/visual-sudoku npm/react/examples/visual-testing-with-applitools npm/react/examples/visual-testing-with-happo npm/react/examples/visual-testing-with-percy npm/react/examples/webpack-file npm/react/examples/webpack-options npm/vue/examples/code-coverage npm/vue/examples/vue-cli npm/webpack-preprocessor/examples/react-app npm/webpack-preprocessor/examples/use-babelrc npm/webpack-preprocessor/examples/use-ts-loader ➜ cypress git:(develop) ✗ echo npm/**/examples/** | xargs -n1 echo | wc -l 24 ``` All of these can be converted to `system-tests`. This is especially important to save CI time and cost for the example projects that require `yarn install`, since currently we're running `yarn install` for each one of those projects, for each commit. Partially blocked by #18574 which adds the `yarn install` caching to `system-tests`.
process
convert npm examples tests to system tests there are example projects in npm examples ➜ cypress git develop ✗ echo npm examples xargs echo npm react examples npm react examples craco npm react examples find webpack npm react examples nextjs npm react examples nextjs webpack npm react examples react scripts npm react examples react scripts folder npm react examples react scripts typescript npm react examples sass and ts npm react examples snapshots npm react examples tailwind npm react examples using babel npm react examples using babel typescript npm react examples visual sudoku npm react examples visual testing with applitools npm react examples visual testing with happo npm react examples visual testing with percy npm react examples webpack file npm react examples webpack options npm vue examples code coverage npm vue examples vue cli npm webpack preprocessor examples react app npm webpack preprocessor examples use babelrc npm webpack preprocessor examples use ts loader ➜ cypress git develop ✗ echo npm examples xargs echo wc l all of these can be converted to system tests this is especially important to save ci time and cost for the example projects that require yarn install since currently we re running yarn install for each one of those projects for each commit partially blocked by which adds the yarn install caching to system tests
1
137,346
30,675,885,167
IssuesEvent
2023-07-26 05:11:07
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
tappy 1.0.1 has 2 GuardDog issues
guarddog code-execution typosquatting
https://pypi.org/project/tappy https://inspector.pypi.io/project/tappy ```{ "dependency": "tappy", "version": "1.0.1", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: tap-py", "code-execution": [ { "location": "tappy-1.0.1/setup.py:13", "code": " subprocess.run(shlex.split(\"cleanpy .\"), check=True)", "message": "This package is executing OS commands in the setup.py file" } ] }, "path": "/tmp/tmpun3635rv/tappy" } }```
1.0
tappy 1.0.1 has 2 GuardDog issues - https://pypi.org/project/tappy https://inspector.pypi.io/project/tappy ```{ "dependency": "tappy", "version": "1.0.1", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: tap-py", "code-execution": [ { "location": "tappy-1.0.1/setup.py:13", "code": " subprocess.run(shlex.split(\"cleanpy .\"), check=True)", "message": "This package is executing OS commands in the setup.py file" } ] }, "path": "/tmp/tmpun3635rv/tappy" } }```
non_process
tappy has guarddog issues dependency tappy version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt tap py code execution location tappy setup py code subprocess run shlex split cleanpy check true message this package is executing os commands in the setup py file path tmp tappy
0
262,475
8,271,381,489
IssuesEvent
2018-09-16 08:12:54
richelbilderbeek/djog_unos_2018
https://api.github.com/repos/richelbilderbeek/djog_unos_2018
closed
Testing
idea low priority
**Is your feature request related to a problem? Please describe.** Testing takes too much time **Describe the solution you'd like** Write a test which will test every part of our game **Describe alternatives you've considered** None **Additional context** Maybe something for @richelbilderbeek or @RafayelGardishyan
1.0
Testing - **Is your feature request related to a problem? Please describe.** Testing takes too much time **Describe the solution you'd like** Write a test which will test every part of our game **Describe alternatives you've considered** None **Additional context** Maybe something for @richelbilderbeek or @RafayelGardishyan
non_process
testing is your feature request related to a problem please describe testing takes too much time describe the solution you d like write a test which will test every part of our game describe alternatives you ve considered none additional context maybe something for richelbilderbeek or rafayelgardishyan
0
673,625
23,023,652,569
IssuesEvent
2022-07-22 07:28:44
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
play.google.com - see bug description
priority-critical browser-focus-geckoview engine-gecko
<!-- @browser: Firefox Mobile 102.0 --> <!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:102.0) Gecko/102.0 Firefox/102.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/107779 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://play.google.com/store/apps/details?id=com.google.android.apps.chromecast.app&pcampaignid=fdl_long&url=http://com.google.android.apps.chromecast.app/unifiedSettings/deviceSettings?userEmail%3Dphattbaby32@gmail.com%26deviceId **Browser / Version**: Firefox Mobile 102.0 **Operating System**: Android 11 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: app is for hackers **Steps to Reproduce**: Looks to have taken a picture from another outlet, to save my information I typed <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/7/39522371-0e5b-45f4-9543-6dd180006330.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220705093820</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2022/7/48925bc0-22a3-4656-b7bf-796b941bdb59) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
play.google.com - see bug description - <!-- @browser: Firefox Mobile 102.0 --> <!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:102.0) Gecko/102.0 Firefox/102.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/107779 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://play.google.com/store/apps/details?id=com.google.android.apps.chromecast.app&pcampaignid=fdl_long&url=http://com.google.android.apps.chromecast.app/unifiedSettings/deviceSettings?userEmail%3Dphattbaby32@gmail.com%26deviceId **Browser / Version**: Firefox Mobile 102.0 **Operating System**: Android 11 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: app is for hackers **Steps to Reproduce**: Looks to have taken a picture from another outlet, to save my information I typed <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/7/39522371-0e5b-45f4-9543-6dd180006330.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220705093820</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2022/7/48925bc0-22a3-4656-b7bf-796b941bdb59) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
play google com see bug description url browser version firefox mobile operating system android tested another browser yes chrome problem type something else description app is for hackers steps to reproduce looks to have taken a picture from another outlet to save my information i typed view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
0
475,960
13,728,744,104
IssuesEvent
2020-10-04 13:05:09
jenkins-x/jx
https://api.github.com/repos/jenkins-x/jx
closed
Define the Kubernetes pod and network security policies in jenkins-x platform
area/boot area/security kind/enhancement lifecycle/rotten priority/important-soon
It would be nice to define the POD and Network security policies for Jenkins X platform in order to increase the isolation between the CI/CD platform and the environments where the applications are running. This could make some users more confident to use the Jenkins X platform in the same cluster with the production environment. https://speakerdeck.com/ianlewis/kubernetes-security-best-practices?slide=36 https://github.com/freach/kubernetes-security-best-practice
1.0
Define the Kubernetes pod and network security policies in jenkins-x platform - It would be nice to define the POD and Network security policies for Jenkins X platform in order to increase the isolation between the CI/CD platform and the environments where the applications are running. This could make some users more confident to use the Jenkins X platform in the same cluster with the production environment. https://speakerdeck.com/ianlewis/kubernetes-security-best-practices?slide=36 https://github.com/freach/kubernetes-security-best-practice
non_process
define the kubernetes pod and network security policies in jenkins x platform it would be nice to define the pod and network security policies for jenkins x platform in order to increase the isolation between the ci cd platform and the environments where the applications are running this could make some users more confident to use the jenkins x platform in the same cluster with the production environment
0
250,688
27,111,194,061
IssuesEvent
2023-02-15 15:25:33
EliyaC/NodeGoat
https://api.github.com/repos/EliyaC/NodeGoat
closed
CVE-2017-16137 (Medium) detected in debug-2.2.0.tgz - autoclosed
security vulnerability
## CVE-2017-16137 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>debug-2.2.0.tgz</b></p></summary> <p>small debugging utility</p> <p>Library home page: <a href="https://registry.npmjs.org/debug/-/debug-2.2.0.tgz">https://registry.npmjs.org/debug/-/debug-2.2.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/npm/node_modules/node-gyp/node_modules/path-array/node_modules/array-index/node_modules/debug/package.json,/node_modules/connect/node_modules/debug/package.json,/node_modules/nyc/node_modules/debug/package.json,/node_modules/mocha/node_modules/debug/package.json</p> <p> Dependency Hierarchy: - mocha-2.5.3.tgz (Root Library) - :x: **debug-2.2.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/EliyaC/NodeGoat/commit/2f9ac315d9e05728b7ce26ce7cf1b4e684e54fde">2f9ac315d9e05728b7ce26ce7cf1b4e684e54fde</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter. It takes around 50k characters to block for 2 seconds making this a low severity issue. 
<p>Publish Date: 2018-06-07 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-16137>CVE-2017-16137</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-16137">https://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-16137</a></p> <p>Release Date: 2018-06-07</p> <p>Fix Resolution (debug): 2.6.9</p> <p>Direct dependency fix Resolution (mocha): 4.0.0</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
True
CVE-2017-16137 (Medium) detected in debug-2.2.0.tgz - autoclosed - ## CVE-2017-16137 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>debug-2.2.0.tgz</b></p></summary> <p>small debugging utility</p> <p>Library home page: <a href="https://registry.npmjs.org/debug/-/debug-2.2.0.tgz">https://registry.npmjs.org/debug/-/debug-2.2.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/npm/node_modules/node-gyp/node_modules/path-array/node_modules/array-index/node_modules/debug/package.json,/node_modules/connect/node_modules/debug/package.json,/node_modules/nyc/node_modules/debug/package.json,/node_modules/mocha/node_modules/debug/package.json</p> <p> Dependency Hierarchy: - mocha-2.5.3.tgz (Root Library) - :x: **debug-2.2.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/EliyaC/NodeGoat/commit/2f9ac315d9e05728b7ce26ce7cf1b4e684e54fde">2f9ac315d9e05728b7ce26ce7cf1b4e684e54fde</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter. It takes around 50k characters to block for 2 seconds making this a low severity issue. 
<p>Publish Date: 2018-06-07 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-16137>CVE-2017-16137</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-16137">https://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-16137</a></p> <p>Release Date: 2018-06-07</p> <p>Fix Resolution (debug): 2.6.9</p> <p>Direct dependency fix Resolution (mocha): 4.0.0</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
non_process
cve medium detected in debug tgz autoclosed cve medium severity vulnerability vulnerable library debug tgz small debugging utility library home page a href path to dependency file package json path to vulnerable library node modules npm node modules node gyp node modules path array node modules array index node modules debug package json node modules connect node modules debug package json node modules nyc node modules debug package json node modules mocha node modules debug package json dependency hierarchy mocha tgz root library x debug tgz vulnerable library found in head commit a href found in base branch master vulnerability details the debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter it takes around characters to block for seconds making this a low severity issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution debug direct dependency fix resolution mocha rescue worker helmet automatic remediation is available for this issue
0
148,135
23,317,391,294
IssuesEvent
2022-08-08 13:38:16
Open-Bootcamp/productos-km0
https://api.github.com/repos/Open-Bootcamp/productos-km0
opened
Diseñar vista de reviews
design
Como usuario, quiero poder ver las reviews que he recibido, las reviews que he dado y las reviews de ventas/compras que tengo pendientes - [ ] Diseñar vista de tabs donde pueda acceder a distintas reviews, en concreto, se debe mostrar un listado por tab de reviews dadas, pendientes y recibidas - [ ] Las reviews estarán basadas en un sistema de puntuación del 1 al 5 - [ ] El listado debe mostrar el título de la review y el puntaje - [ ] Al hacer tap en una review recibida o registrada, debo poder ver en una ventana modal el detalle del mensaje, título, usuario, compra/venta y puntuación - [ ] Al hacer tap en una review pendiente, debo poder ver una ventana modal con un formulario para registrar la review del vendedor/comprador agregando si la transacción se completó o no, un título, un mensaje y una puntuación - [ ] Compartir diseño con desarrollo
1.0
Diseñar vista de reviews - Como usuario, quiero poder ver las reviews que he recibido, las reviews que he dado y las reviews de ventas/compras que tengo pendientes - [ ] Diseñar vista de tabs donde pueda acceder a distintas reviews, en concreto, se debe mostrar un listado por tab de reviews dadas, pendientes y recibidas - [ ] Las reviews estarán basadas en un sistema de puntuación del 1 al 5 - [ ] El listado debe mostrar el título de la review y el puntaje - [ ] Al hacer tap en una review recibida o registrada, debo poder ver en una ventana modal el detalle del mensaje, título, usuario, compra/venta y puntuación - [ ] Al hacer tap en una review pendiente, debo poder ver una ventana modal con un formulario para registrar la review del vendedor/comprador agregando si la transacción se completó o no, un título, un mensaje y una puntuación - [ ] Compartir diseño con desarrollo
non_process
diseñar vista de reviews como usuario quiero poder ver las reviews que he recibido las reviews que he dado y las reviews de ventas compras que tengo pendientes diseñar vista de tabs donde pueda acceder a distintas reviews en concreto se debe mostrar un listado por tab de reviews dadas pendientes y recibidas las reviews estarán basadas en un sistema de puntuación del al el listado debe mostrar el título de la review y el puntaje al hacer tap en una review recibida o registrada debo poder ver en una ventana modal el detalle del mensaje título usuario compra venta y puntuación al hacer tap en una review pendiente debo poder ver una ventana modal con un formulario para registrar la review del vendedor comprador agregando si la transacción se completó o no un título un mensaje y una puntuación compartir diseño con desarrollo
0
11,621
14,484,288,214
IssuesEvent
2020-12-10 16:09:32
elastic/beats
https://api.github.com/repos/elastic/beats
closed
Add "top_level_domain" Enrichment to "registered_domain" Processor
:Processors Team:Security-External Integrations enhancement libbeat
**Describe the enhancement:** https://github.com/elastic/beats/pull/22999 Adds handling for subdomain enrichment, we should do something similar and have the "registered_domain" processor handle all domain enrichment. In addition we may want to consider renaming it.
1.0
Add "top_level_domain" Enrichment to "registered_domain" Processor - **Describe the enhancement:** https://github.com/elastic/beats/pull/22999 Adds handling for subdomain enrichment, we should do something similar and have the "registered_domain" processor handle all domain enrichment. In addition we may want to consider renaming it.
process
add top level domain enrichment to registered domain processor describe the enhancement adds handling for subdomain enrichment we should do something similar and have the registered domain processor handle all domain enrichment in addition we may want to consider renaming it
1
75,285
9,218,847,347
IssuesEvent
2019-03-11 14:16:57
mozilla/foundation.mozilla.org
https://api.github.com/repos/mozilla/foundation.mozilla.org
closed
Scorecard for Misinfo Campaign
Misinfo ui design
The Campaigns team is asking for the creation of a 'Scorecard' they can direct tech companies, EU/UK commissioners, partners and our email list to. The purpose is to put pressure on tech companies to improve their platform APIs and for partners to see where are the areas they can campaign on. **Examples of Scorecards:** [here](https://www.macrumors.com/2015/05/12/apple-tops-2015-greenpeace-clean-energy-index/), [here](https://www.forestandbird.org.nz/resources/five-out-seven-political-parties-want-end-irrigation-subsidies) and [here (page four)](https://www.greenpeace.org.uk/wp-content/uploads/2017/05/9fb0ba4a-palm-oil-scorecard-final.pdf) **More info/context about the Scorecard:** [here](https://docs.google.com/document/d/1MrZ8CLF0BrO_7PoRsjw0BryAtMNrnW7FipGPKI-uXEY/edit) **Larger context about the Misinfo campaign:** [here](https://docs.google.com/document/d/1RS6sD4UB-8NEtsoiL11hesp8y9dzkBfDZ5a_GldVeyw/edit) This will also be a good opportunity for the engagement team to build out a table cms component for the foundation site.
1.0
Scorecard for Misinfo Campaign - The Campaigns team is asking for the creation of a 'Scorecard' they can direct tech companies, EU/UK commissioners, partners and our email list to. The purpose is to put pressure on tech companies to improve their platform APIs and for partners to see where are the areas they can campaign on. **Examples of Scorecards:** [here](https://www.macrumors.com/2015/05/12/apple-tops-2015-greenpeace-clean-energy-index/), [here](https://www.forestandbird.org.nz/resources/five-out-seven-political-parties-want-end-irrigation-subsidies) and [here (page four)](https://www.greenpeace.org.uk/wp-content/uploads/2017/05/9fb0ba4a-palm-oil-scorecard-final.pdf) **More info/context about the Scorecard:** [here](https://docs.google.com/document/d/1MrZ8CLF0BrO_7PoRsjw0BryAtMNrnW7FipGPKI-uXEY/edit) **Larger context about the Misinfo campaign:** [here](https://docs.google.com/document/d/1RS6sD4UB-8NEtsoiL11hesp8y9dzkBfDZ5a_GldVeyw/edit) This will also be a good opportunity for the engagement team to build out a table cms component for the foundation site.
non_process
scorecard for misinfo campaign the campaigns team is asking for the creation of a scorecard they can direct tech companies eu uk commissioners partners and our email list to the purpose is to put pressure on tech companies to improve their platform apis and for partners to see where are the areas they can campaign on examples of scorecards and more info context about the scorecard larger context about the misinfo campaign this will also be a good opportunity for the engagement team to build out a table cms component for the foundation site
0
22,147
30,687,373,365
IssuesEvent
2023-07-26 13:12:12
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
[MLv2] `joinable-columns` returns nothing if a given table isn't linked to the source table
.Frontend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
It seems like `joinabe-columns` returns columns only for tables that are linked to the source table via FKs. It doesn't match MLv1 behavior when it's possible to join any table in the same database.
1.0
[MLv2] `joinable-columns` returns nothing if a given table isn't linked to the source table - It seems like `joinabe-columns` returns columns only for tables that are linked to the source table via FKs. It doesn't match MLv1 behavior when it's possible to join any table in the same database.
process
joinable columns returns nothing if a given table isn t linked to the source table it seems like joinabe columns returns columns only for tables that are linked to the source table via fks it doesn t match behavior when it s possible to join any table in the same database
1
3,905
6,824,239,115
IssuesEvent
2017-11-08 04:49:25
azavea/transit-center-viz
https://api.github.com/repos/azavea/transit-center-viz
closed
Modify coordinate of msa centroids
data-preprocessing
- create new dataset in which MSAs are geocoded to the primary city rather than the centroid of the msa - replace existing dataset in carto
1.0
Modify coordinate of msa centroids - - create new dataset in which MSAs are geocoded to the primary city rather than the centroid of the msa - replace existing dataset in carto
process
modify coordinate of msa centroids create new dataset in which msas are geocoded to the primary city rather than the centroid of the msa replace existing dataset in carto
1
4,003
6,928,417,890
IssuesEvent
2017-12-01 04:44:39
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
FAT-DB in Parity
libs-etherlib status-inprocess type-enhancement
Check out this Parity command line option for 'cloud' version --fat-db=[BOOL] Build appropriate information to allow enumeration of all accounts and storage keys. Doubles the size of the state database. BOOL may be one of on, off or auto. (default: auto)
1.0
FAT-DB in Parity - Check out this Parity command line option for 'cloud' version --fat-db=[BOOL] Build appropriate information to allow enumeration of all accounts and storage keys. Doubles the size of the state database. BOOL may be one of on, off or auto. (default: auto)
process
fat db in parity check out this parity command line option for cloud version fat db build appropriate information to allow enumeration of all accounts and storage keys doubles the size of the state database bool may be one of on off or auto default auto
1
165,314
13,998,133,549
IssuesEvent
2020-10-28 09:02:50
koaning/scikit-lego
https://api.github.com/repos/koaning/scikit-lego
closed
[DOCS] Better Docstring for `make_simpleseries`
documentation
- There seems to be a typo or a gramatical mistake in the docsting for `datasets.make_simpleseries` where it says: > Generate a very simple timeseries dataset to play with. The generator **_assumes_** to generate daily data with a season, trend and noise. I think instead of "The generator assumes to generate [...]" it could say something like "The generator returns a daily time-series with a yearly seasonality, trend, and noise." I believe it would be clearer. I'd be glad to help out.
1.0
[DOCS] Better Docstring for `make_simpleseries` - - There seems to be a typo or a gramatical mistake in the docsting for `datasets.make_simpleseries` where it says: > Generate a very simple timeseries dataset to play with. The generator **_assumes_** to generate daily data with a season, trend and noise. I think instead of "The generator assumes to generate [...]" it could say something like "The generator returns a daily time-series with a yearly seasonality, trend, and noise." I believe it would be clearer. I'd be glad to help out.
non_process
better docstring for make simpleseries there seems to be a typo or a gramatical mistake in the docsting for datasets make simpleseries where it says generate a very simple timeseries dataset to play with the generator assumes to generate daily data with a season trend and noise i think instead of the generator assumes to generate it could say something like the generator returns a daily time series with a yearly seasonality trend and noise i believe it would be clearer i d be glad to help out
0
11,404
14,237,907,839
IssuesEvent
2020-11-18 17:51:33
ORNL-AMO/AMO-Tools-Desktop
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
opened
Validation updates to Wall
Process Heating
Changing all warnings to validation in new wall standalone and assessment.
1.0
Validation updates to Wall - Changing all warnings to validation in new wall standalone and assessment.
process
validation updates to wall changing all warnings to validation in new wall standalone and assessment
1
51,730
6,194,887,623
IssuesEvent
2017-07-05 11:05:16
TheScienceMuseum/collectionsonline
https://api.github.com/repos/TheScienceMuseum/collectionsonline
closed
Show↓ link under System of Arrangement not working in Chrome.
bug please-test priority-2 T2h
It's also a bit flaky in Safari. http://collection.sciencemuseum.org.uk/documents/aa110000005/records-of-s-pearson-son-ltd-london-and-associated-companies Also, see this related issue https://github.com/TheScienceMuseum/collectionsonline/issues/813
1.0
Show↓ link under System of Arrangement not working in Chrome. - It's also a bit flaky in Safari. http://collection.sciencemuseum.org.uk/documents/aa110000005/records-of-s-pearson-son-ltd-london-and-associated-companies Also, see this related issue https://github.com/TheScienceMuseum/collectionsonline/issues/813
non_process
show↓ link under system of arrangement not working in chrome it s also a bit flaky in safari also see this related issue
0
148,979
13,252,214,018
IssuesEvent
2020-08-20 04:37:42
synfinatic/vpnexiter
https://api.github.com/repos/synfinatic/vpnexiter
opened
Better docs for Speedtest/Custom support
documentation
There a few things people need to be aware of for this to work.
1.0
Better docs for Speedtest/Custom support - There a few things people need to be aware of for this to work.
non_process
better docs for speedtest custom support there a few things people need to be aware of for this to work
0
19,678
26,031,908,212
IssuesEvent
2022-12-21 22:20:21
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Missing requirements for build numbers
doc-enhancement devops/prod Pri1 devops-cicd-process/tech
(I received an error from violating these rules, which weren't listed on this page.. so...) Page should include the requirements for run(build) number.. Namely: * The maximum length of a build number is 255 characters. * Characters which are not allowed include '"', '/', ':', '<', '>', '\', '|', '?', '@', and '*'. * Run (Build) number can not end with '.' -Peter Error I received was: The build number format string $(versionNumber) generated a build number 0.2.24. which contains invalid character(s), is too long, or ends with '.'. The maximum length of a build number is 255 characters. Characters which are not allowed include '"', '/', ':', '<', '>', '\', '|', '?', '@', and '*'. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93 * Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7 * Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Missing requirements for build numbers - (I received an error from violating these rules, which weren't listed on this page.. so...) Page should include the requirements for run(build) number.. Namely: * The maximum length of a build number is 255 characters. * Characters which are not allowed include '"', '/', ':', '<', '>', '\', '|', '?', '@', and '*'. * Run (Build) number can not end with '.' -Peter Error I received was: The build number format string $(versionNumber) generated a build number 0.2.24. which contains invalid character(s), is too long, or ends with '.'. The maximum length of a build number is 255 characters. Characters which are not allowed include '"', '/', ':', '<', '>', '\', '|', '?', '@', and '*'. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93 * Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7 * Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
missing requirements for build numbers i received an error from violating these rules which weren t listed on this page so page should include the requirements for run build number namely the maximum length of a build number is characters characters which are not allowed include and run build number can not end with peter error i received was the build number format string versionnumber generated a build number which contains invalid character s is too long or ends with the maximum length of a build number is characters characters which are not allowed include and document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
6,793
9,923,500,055
IssuesEvent
2019-07-01 07:24:26
linnovate/root
https://api.github.com/repos/linnovate/root
closed
multiple select, tags tab, update button disappears
2.0.7 Fixed Not Reproducible Process bug Search
multiple select, in tags addition, the update button disappears after inputing too many tags, making the user unable to add the new tags ![image](https://user-images.githubusercontent.com/38312178/52791985-ec472000-3072-11e9-89af-44660c0a3f27.png)
1.0
multiple select, tags tab, update button disappears - multiple select, in tags addition, the update button disappears after inputing too many tags, making the user unable to add the new tags ![image](https://user-images.githubusercontent.com/38312178/52791985-ec472000-3072-11e9-89af-44660c0a3f27.png)
process
multiple select tags tab update button disappears multiple select in tags addition the update button disappears after inputing too many tags making the user unable to add the new tags
1
171,895
21,004,766,590
IssuesEvent
2022-03-29 21:15:44
phamleduy04/loto-website
https://api.github.com/repos/phamleduy04/loto-website
closed
axios-0.24.0.tgz: 1 vulnerabilities (highest severity is: 5.9) - autoclosed
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>axios-0.24.0.tgz</b></p></summary> <p></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/phamleduy04/loto-website/commit/7b0c2bc21942b6d045dc285b78c175b99f672801">7b0c2bc21942b6d045dc285b78c175b99f672801</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-0536](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.9 | follow-redirects-1.14.7.tgz | Transitive | 0.25.0 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-0536</summary> ### Vulnerable Library - <b>follow-redirects-1.14.7.tgz</b></p> <p>HTTP and HTTPS modules that follow redirects.</p> <p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p> <p> Dependency Hierarchy: - axios-0.24.0.tgz (Root Library) - :x: **follow-redirects-1.14.7.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/phamleduy04/loto-website/commit/7b0c2bc21942b6d045dc285b78c175b99f672801">7b0c2bc21942b6d045dc285b78c175b99f672801</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details 
<p> Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8. <p>Publish Date: 2022-02-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536>CVE-2022-0536</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.9</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536</a></p> <p>Release Date: 2022-02-09</p> <p>Fix Resolution (follow-redirects): 1.14.8</p> <p>Direct dependency fix Resolution (axios): 0.25.0</p> </p> <p></p> Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details> <!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"axios","packageVersion":"0.24.0","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"axios:0.24.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.25.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-0536","vulnerabilityDetails":"Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 
1.14.8.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> -->
True
axios-0.24.0.tgz: 1 vulnerabilities (highest severity is: 5.9) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>axios-0.24.0.tgz</b></p></summary> <p></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/phamleduy04/loto-website/commit/7b0c2bc21942b6d045dc285b78c175b99f672801">7b0c2bc21942b6d045dc285b78c175b99f672801</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-0536](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.9 | follow-redirects-1.14.7.tgz | Transitive | 0.25.0 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-0536</summary> ### Vulnerable Library - <b>follow-redirects-1.14.7.tgz</b></p> <p>HTTP and HTTPS modules that follow redirects.</p> <p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p> <p> Dependency Hierarchy: - axios-0.24.0.tgz (Root Library) - :x: **follow-redirects-1.14.7.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/phamleduy04/loto-website/commit/7b0c2bc21942b6d045dc285b78c175b99f672801">7b0c2bc21942b6d045dc285b78c175b99f672801</a></p> 
<p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8. <p>Publish Date: 2022-02-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536>CVE-2022-0536</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.9</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536</a></p> <p>Release Date: 2022-02-09</p> <p>Fix Resolution (follow-redirects): 1.14.8</p> <p>Direct dependency fix Resolution (axios): 0.25.0</p> </p> <p></p> Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details> <!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"axios","packageVersion":"0.24.0","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"axios:0.24.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.25.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-0536","vulnerabilityDetails":"Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 
1.14.8.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> -->
non_process
axios tgz vulnerabilities highest severity is autoclosed vulnerable library axios tgz path to dependency file package json path to vulnerable library node modules follow redirects package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium follow redirects tgz transitive details cve vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file package json path to vulnerable library node modules follow redirects package json dependency hierarchy axios tgz root library x follow redirects tgz vulnerable library found in head commit a href found in base branch main vulnerability details exposure of sensitive information to an unauthorized actor in npm follow redirects prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects direct dependency fix resolution axios step up your open source security game with whitesource istransitivedependency false dependencytree axios isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails exposure of sensitive information to an unauthorized actor in npm follow redirects prior to vulnerabilityurl
0
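The advisory in the record above marks follow-redirects releases prior to 1.14.8 as affected. A minimal sketch of how such a "fixed in" boundary is evaluated (illustrative only — not WhiteSource's actual matching logic, and real matchers must also handle pre-release tags):

```python
def parse_version(v):
    """Parse a plain dotted version string like '1.14.7' into a tuple."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, fixed_in="1.14.8"):
    """Return True if `installed` is strictly below the first fixed release."""
    return parse_version(installed) < parse_version(fixed_in)

print(is_vulnerable("1.14.7"))  # → True
print(is_vulnerable("1.14.8"))  # → False
```

Tuple comparison only covers plain dotted versions; production scanners use full semver comparison.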
503,365
14,590,126,601
IssuesEvent
2020-12-19 05:57:09
kubeflow/katib
https://api.github.com/repos/kubeflow/katib
closed
Use kfctl for E2E tests
area/0.4.0 lifecycle/stale priority/p2
Right now the E2E tests in kubeflow/katib are using their own custom scripts to deploy https://github.com/kubeflow/katib/tree/master/test/scripts Should we switch to using kfctl like we do in kubeflow/kubeflow https://github.com/kubeflow/kubeflow/blob/70c13c6fe77cd96dcad9ebc5a081e1c5d1a1f1e2/testing/workflows/components/workflows.libsonnet#L245 So we can reuse logic for creating clusters as well as deploying kubeflow?
1.0
Use kfctl for E2E tests - Right now the E2E tests in kubeflow/katib are using their own custom scripts to deploy https://github.com/kubeflow/katib/tree/master/test/scripts Should we switch to using kfctl like we do in kubeflow/kubeflow https://github.com/kubeflow/kubeflow/blob/70c13c6fe77cd96dcad9ebc5a081e1c5d1a1f1e2/testing/workflows/components/workflows.libsonnet#L245 So we can reuse logic for creating clusters as well as deploying kubeflow?
non_process
use kfctl for tests right now the tests in kubeflow katib are using their own custom scripts to deploy should we switch to using kfctl like we do in kubeflow kubeflow so we can reuse logic for creating clusters as well as deploying kubeflow
0
18,100
24,126,989,752
IssuesEvent
2022-09-21 02:04:24
bitPogo/kmock
https://api.github.com/repos/bitPogo/kmock
closed
AccessMethods can collide with Generics
bug kmock-processor
## Description <!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug --> Currently things like: ```kotlin interface Collision { fun <T : Something<K>, K> foo(arg: T) fun <T : Something<R>, R> bar(arg: T) } ``` cause collisions since the parameters are not entirely resolved.
1.0
AccessMethods can collide with Generics - ## Description <!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug --> Currently things like: ```kotlin interface Collision { fun <T : Something<K>, K> foo(arg: T) fun <T : Something<R>, R> bar(arg: T) } ``` cause collisions since the parameters are not entirely resolved.
process
accessmethods can collide with generics description currently things like kotlin interface collision fun k foo arg t fun r bar arg t cause collisions since the parameters are not entirely resolved
1
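The collision in the record above can be made concrete with a toy name-mangling scheme (a hypothetical sketch in Python — kmock's processor operates on Kotlin type parameters and uses its own naming rules): if the generated access-method key keeps only the erased bound names and drops the distinct type-parameter names, `foo` and `bar` collapse to the same key.

```python
def naive_key(params):
    """Build an access-method key from erased bound names only.

    Dropping the generic arguments ('<K>' vs '<R>') and the parameter
    names loses exactly the information that distinguishes foo from bar.
    """
    return "_".join(p["bound"].split("<")[0] for p in params)

def resolved_key(params):
    """Keep the fully resolved parameter names and bounds, so the two
    signatures stay distinct."""
    return "_".join(f'{p["name"]}:{p["bound"]}' for p in params)

# Type parameters of the two interface methods from the report.
foo = [{"name": "T", "bound": "Something<K>"}, {"name": "K", "bound": ""}]
bar = [{"name": "T", "bound": "Something<R>"}, {"name": "R", "bound": ""}]

print(naive_key(foo) == naive_key(bar))        # → True (collision)
print(resolved_key(foo) == resolved_key(bar))  # → False (distinct)
```

The fix described by the issue amounts to moving from the first scheme to something like the second: resolving the parameters fully before deriving the key.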
319,591
23,780,372,243
IssuesEvent
2022-09-02 03:37:06
Vetpetmon-Labs/SNAZpedia-Issue-Tracker
https://api.github.com/repos/Vetpetmon-Labs/SNAZpedia-Issue-Tracker
reopened
Uploaded animated GIFs do NOT animate
documentation Has Workaround
Link 1: https://vetpetmon.com/snazpedia/index.php/Umbral_Coalition_(Game) Link 2: https://vetpetmon.com/snazpedia/index.php/File:Heated.gif Description: GIFs uploaded do not animate when they are on a page (See Link 1). However, when viewed directly, they are animated (See Link 2). Steps to reproduce: Upload an animated GIF file to the server, then insert it onto a wiki page. What should happen: Animated GIFs should play. What actually happens: Animated GIFs do not play. ==================================== Comments: Possibly something in the backend/serverside that's causing this. I am looking into it.
1.0
Uploaded animated GIFs do NOT animate - Link 1: https://vetpetmon.com/snazpedia/index.php/Umbral_Coalition_(Game) Link 2: https://vetpetmon.com/snazpedia/index.php/File:Heated.gif Description: GIFs uploaded do not animate when they are on a page (See Link 1). However, when viewed directly, they are animated (See Link 2). Steps to reproduce: Upload an animated GIF file to the server, then insert it onto a wiki page. What should happen: Animated GIFs should play. What actually happens: Animated GIFs do not play. ==================================== Comments: Possibly something in the backend/serverside that's causing this. I am looking into it.
non_process
uploaded animated gifs do not animate link link description gifs uploaded do not animate when they are on a page see link however when viewed directly they are animated see link steps to reproduce upload an animated gif file to the server then insert it onto a wiki page what should happen animated gifs should play what actually happens animated gifs do not play comments possibly something in the backend serverside that s causing this i am looking into it
0
189,125
14,490,088,666
IssuesEvent
2020-12-11 01:27:58
cocotb/cocotb
https://api.github.com/repos/cocotb/cocotb
opened
Expand CI tests to other OSes
category:simulators:ghdl category:simulators:verilator category:tests-ci
Currently we are only testing Icarus in CI on other OSes. We should expand that to test GHDL and Verilator on Windows and Mac OS. Verilator support on Windows is broken; we could have caught that a bit sooner if we were testing more.
1.0
Expand CI tests to other OSes - Currently we are only testing Icarus in CI on other OSes. We should expand that to test GHDL and Verilator on Windows and Mac OS. Verilator support on Windows is broken; we could have caught that a bit sooner if we were testing more.
non_process
expand ci tests to other oses currently we are only testing icarus in ci on other oses we should expand that to test ghdl and verilator on windows and mac os verilator support on windows is broken we could have caught that a bit sooner if we were testing more
0
444,953
31,158,158,853
IssuesEvent
2023-08-16 14:20:58
redhat-cop/vault-config-operator
https://api.github.com/repos/redhat-cop/vault-config-operator
closed
The operator keeps sending login requests to vault every second
documentation
I've configured the operator with the kubernetes auth method. It seems to be sending one login request every second to vault. Login succeeds, vault grants it a lease, but it sends another request. I've been running the operator for a couple of days now, and my `vault.db` has ballooned to about 2 GB (and growing) with ~500,000 leases at `auth/kubernetes/login`. Any idea what might be causing the operator to behave like this? I don't see anything noteworthy in the operator's logs. The only custom resources I'm using are: - Kubernetes auth roles - Policies - Database secrets engine with auto-rotation - Database static creds - Database creds Vault version 1.11.2
1.0
The operator keeps sending login requests to vault every second - I've configured the operator with the kubernetes auth method. It seems to be sending one login request every second to vault. Login succeeds, vault grants it a lease, but it sends another request. I've been running the operator for a couple of days now, and my `vault.db` has ballooned to about 2 GB (and growing) with ~500,000 leases at `auth/kubernetes/login`. Any idea what might be causing the operator to behave like this? I don't see anything noteworthy in the operator's logs. The only custom resources I'm using are: - Kubernetes auth roles - Policies - Database secrets engine with auto-rotation - Database static creds - Database creds Vault version 1.11.2
non_process
the operator keeps sending login requests to vault every second i ve configured the operator with the kubernetes auth method it seems to be sending one login request every second to vault login succeeds vault grants it a lease but it sends another request i ve been running the operator for a couple of days now and my vault db has ballooned to about gb and growing with leases at auth kubernetes login any idea what might be causing the operator to behave like this i don t see anything noteworthy in the operator s logs the only custom resources i m using are kubernetes auth roles policies database secrets engine with auto rotation database static creds database creds vault version
0
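The once-per-second login pattern described in the record above is exactly what lease caching avoids. The toy model below (not the operator's or Vault's actual client code) contrasts logging in on every request with reusing a cached token until its lease expires:

```python
class ToyVaultClient:
    """Counts logins; a real client would call the Kubernetes auth endpoint."""

    def __init__(self, lease_seconds):
        self.lease_seconds = lease_seconds
        self.logins = 0
        self._token = None
        self._expires_at = 0.0

    def _login(self, now):
        # Each login creates a new token and a new lease on the server.
        self.logins += 1
        self._token = f"token-{self.logins}"
        self._expires_at = now + self.lease_seconds

    def request_naive(self, now):
        # Anti-pattern: a fresh login (and a fresh lease) for every request.
        self._login(now)
        return self._token

    def request_cached(self, now):
        # Reuse the token until its lease runs out, then log in again.
        if self._token is None or now >= self._expires_at:
            self._login(now)
        return self._token

naive = ToyVaultClient(lease_seconds=60)
cached = ToyVaultClient(lease_seconds=60)
for second in range(120):  # two minutes of one request per second
    naive.request_naive(second)
    cached.request_cached(second)

print(naive.logins)   # 120 leases created
print(cached.logins)  # 2 leases created
```

At one request per second with a 60-second lease, the naive client creates sixty times as many leases — which is how a small embedded `vault.db` balloons.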
9,310
2,615,143,695
IssuesEvent
2015-03-01 06:19:06
chrsmith/html5rocks
https://api.github.com/repos/chrsmith/html5rocks
closed
HTML Audio + Video -> Audio sample play/pause buttons don't work as expected.
auto-migrated Priority-Medium Slides Type-Defect
``` http://slides.html5rocks.com/#slide22 When playing the US national anthem and pressing the pause icon on the audio element controls, I expect the "Pause" button to be disabled and the "Play" button to be enabled. However, this does not happen. I need to click the "Pause" button in order to make this happen. After that, I can use both the play icon in the audio controls and the "Play" button to un-pause the audio. ``` Original issue reported on code.google.com by `skylined@chromium.org` on 24 Jun 2010 at 8:20 * Merged into: #5
1.0
HTML Audio + Video -> Audio sample play/pause buttons don't work as expected. - ``` http://slides.html5rocks.com/#slide22 When playing the US national anthem and pressing the pause icon on the audio element controls, I expect the "Pause" button to be disabled and the "Play" button to be enabled. However, this does not happen. I need to click the "Pause" button in order to make this happen. After that, I can use both the play icon in the audio controls and the "Play" button to un-pause the audio. ``` Original issue reported on code.google.com by `skylined@chromium.org` on 24 Jun 2010 at 8:20 * Merged into: #5
non_process
html audio video audio sample play pause buttons don t work as expected when playing the us national anthem and pressing the pause icon on the audio element controls i expect the pause button to be disabled and the play button to be enabled however this does not happen i need to click the pause button in order to make this happen after that i can use both the play icon in the audio controls and the play button to un pause the audio original issue reported on code google com by skylined chromium org on jun at merged into
0
69,734
8,446,139,921
IssuesEvent
2018-10-19 00:52:00
ricarthlima/eo-project-es
https://api.github.com/repos/ricarthlima/eo-project-es
closed
In-person meeting with client
Categoria: Design Categoria: Estudo Categoria: Gestão Categoria: Validação P1
# Meeting Minutes (Meeting with the client) ## Information **Date: 18/10/2018** **Time: 18:40** **Location: Block E** **Participants:** Monalisa Sousa, Guilherme Prado, Ricarth Lima, Warley Souza and Paula Vaz ## Meeting Agenda In this meeting the "complete" storyboard of our application was validated, along with some specific changes we made to it, as well as a general conversation about the vegan audience and tips on interesting tools for us. ## Topics Discussed #### Validation of the complete storyboard This discussion covered the changes to the storyboard, such as the consultations screen, as well as the validation of our complete storyboard with Paula, especially the initial screen of our application. #### General conversation about the vegan audience and tips This discussion covered a conversation about the vegan audience, sources of information about it, and her recommendations of useful tools for us, such as "figma" and "flaticon". ## Record of Decisions From this meeting it was agreed to research the links/tips she sent us and to hold a longer team meeting to define strategies next week. ## Commitments - **Everyone:** - Implement general improvements and in-depth research; - **Next meetings**: - Monday, Wednesday and Friday; Block E, 16:00 to 17:00.
1.0
In-person meeting with client - # Meeting Minutes (Meeting with the client) ## Information **Date: 18/10/2018** **Time: 18:40** **Location: Block E** **Participants:** Monalisa Sousa, Guilherme Prado, Ricarth Lima, Warley Souza and Paula Vaz ## Meeting Agenda In this meeting the "complete" storyboard of our application was validated, along with some specific changes we made to it, as well as a general conversation about the vegan audience and tips on interesting tools for us. ## Topics Discussed #### Validation of the complete storyboard This discussion covered the changes to the storyboard, such as the consultations screen, as well as the validation of our complete storyboard with Paula, especially the initial screen of our application. #### General conversation about the vegan audience and tips This discussion covered a conversation about the vegan audience, sources of information about it, and her recommendations of useful tools for us, such as "figma" and "flaticon". ## Record of Decisions From this meeting it was agreed to research the links/tips she sent us and to hold a longer team meeting to define strategies next week. ## Commitments - **Everyone:** - Implement general improvements and in-depth research; - **Next meetings**: - Monday, Wednesday and Friday; Block E, 16:00 to 17:00.
non_process
in person meeting with client meeting minutes meeting with the client information date time location block e participants monalisa sousa guilherme prado ricarth lima warley souza and paula vaz meeting agenda in this meeting the complete storyboard of our application was validated along with some specific changes we made to it as well as a general conversation about the vegan audience and tips on interesting tools for us topics discussed validation of the complete storyboard this discussion covered the changes to the storyboard such as the consultations screen as well as the validation of our complete storyboard with paula especially the initial screen of our application general conversation about the vegan audience and tips this discussion covered a conversation about the vegan audience sources of information about it and her recommendations of useful tools for us such as figma and flaticon record of decisions from this meeting it was agreed to research the links tips she sent us and to hold a longer team meeting to define strategies next week commitments everyone implement general improvements and in depth research next meetings monday wednesday and friday block e until
0
5,434
8,299,430,897
IssuesEvent
2018-09-21 02:51:46
googleapis/nodejs-error-reporting
https://api.github.com/repos/googleapis/nodejs-error-reporting
closed
Fix the linting tests
priority: p2 type: process
Problems with the linting tests that seem to be unrelated to linting problems, but are instead related to the use of `npm link` with `npm install` were causing the linting tests to fail, prevent landing changes for PR #139. The linting test would fail with the following error: ``` cd samples/ npm link ../ npm install cd .. > @google-cloud/error-reporting@0.5.0 prepare /home/node/project > npm run compile > @google-cloud/error-reporting@0.5.0 compile /home/node/project > tsc -p . > @google-cloud/error-reporting@0.5.0 postcompile /home/node/project > cpy 'utils/**/*.*' build --parents && cpy 'test/**/*.*' build --parents npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.4 (node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.4: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) up to date in 15.714s /home/node/.npm-global/lib/node_modules/@google-cloud/error-reporting -> /home/node/project /home/node/project/samples/node_modules/@google-cloud/error-reporting -> /home/node/.npm-global/lib/node_modules/@google-cloud/error-reporting -> /home/node/project npm ERR! path /home/node/project/node_modules/@google-cloud/common npm ERR! code ENOENT npm ERR! errno -2 npm ERR! syscall rename npm ERR! enoent ENOENT: no such file or directory, rename '/home/node/project/node_modules/@google-cloud/common' -> '/home/node/project/node_modules/@google-cloud/.common.DELETE' npm ERR! enoent This is related to npm not being able to find a file. npm ERR! enoent npm ERR! A complete log of this run can be found in: npm ERR! /home/node/.npm/_logs/2018-06-23T18_56_25_590Z-debug.log Exited with code 254 ``` Re-enable and fix the linting test so that it reliably gives signals to the status of code health.
1.0
Fix the linting tests - Problems with the linting tests that seem to be unrelated to linting problems, but are instead related to the use of `npm link` with `npm install` were causing the linting tests to fail, prevent landing changes for PR #139. The linting test would fail with the following error: ``` cd samples/ npm link ../ npm install cd .. > @google-cloud/error-reporting@0.5.0 prepare /home/node/project > npm run compile > @google-cloud/error-reporting@0.5.0 compile /home/node/project > tsc -p . > @google-cloud/error-reporting@0.5.0 postcompile /home/node/project > cpy 'utils/**/*.*' build --parents && cpy 'test/**/*.*' build --parents npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.4 (node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.4: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) up to date in 15.714s /home/node/.npm-global/lib/node_modules/@google-cloud/error-reporting -> /home/node/project /home/node/project/samples/node_modules/@google-cloud/error-reporting -> /home/node/.npm-global/lib/node_modules/@google-cloud/error-reporting -> /home/node/project npm ERR! path /home/node/project/node_modules/@google-cloud/common npm ERR! code ENOENT npm ERR! errno -2 npm ERR! syscall rename npm ERR! enoent ENOENT: no such file or directory, rename '/home/node/project/node_modules/@google-cloud/common' -> '/home/node/project/node_modules/@google-cloud/.common.DELETE' npm ERR! enoent This is related to npm not being able to find a file. npm ERR! enoent npm ERR! A complete log of this run can be found in: npm ERR! /home/node/.npm/_logs/2018-06-23T18_56_25_590Z-debug.log Exited with code 254 ``` Re-enable and fix the linting test so that it reliably gives signals to the status of code health.
process
fix the linting tests problems with the linting tests that seem to be unrelated to linting problems but are instead related to the use of npm link with npm install were causing the linting tests to fail prevent landing changes for pr the linting test would fail with the following error cd samples npm link npm install cd google cloud error reporting prepare home node project npm run compile google cloud error reporting compile home node project tsc p google cloud error reporting postcompile home node project cpy utils build parents cpy test build parents npm warn optional skipping optional dependency fsevents node modules fsevents npm warn notsup skipping optional dependency unsupported platform for fsevents wanted os darwin arch any current os linux arch up to date in home node npm global lib node modules google cloud error reporting home node project home node project samples node modules google cloud error reporting home node npm global lib node modules google cloud error reporting home node project npm err path home node project node modules google cloud common npm err code enoent npm err errno npm err syscall rename npm err enoent enoent no such file or directory rename home node project node modules google cloud common home node project node modules google cloud common delete npm err enoent this is related to npm not being able to find a file npm err enoent npm err a complete log of this run can be found in npm err home node npm logs debug log exited with code re enable and fix the linting test so that it reliably gives signals to the status of code health
1
17,002
22,365,843,354
IssuesEvent
2022-06-16 03:54:56
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Processing: confusion when listing inputs from "virtual layers"
Processing Bug
### What is the bug or the crash? Running some processing algorithms within a project with numerous layers contained in a geopackage, I noticed that the same layers appeared many times in the form input layer. After going through some steps I remembered having looked at the preview of the layers inside the db manager. From there I did some tests and I saw that once the preview of the table was created, a virtual layer was automatically created within the project layers, inside db manager. By saving the project these layers are saved, generating a redundancy of layers that makes it difficult to manage during processing operations. Furthermore, it is not possible to delete these virtual layers. ### Steps to reproduce the issue I gave it a try by creating a new project from scratch, so I loaded the layers, connected the geopackage in the db manager and checked the virtual layers in the project layers ![Screenshot (11)](https://user-images.githubusercontent.com/43775440/152252206-49e30a86-258e-48ea-9420-0bd8a136626b.png) I tried to launch a geoprocessing tool and only the layers present in the panel appeared ![Screenshot (10)](https://user-images.githubusercontent.com/43775440/152251842-66a28d26-6927-4d19-a0aa-2e435e7c89c4.png) I activated a preview of the table in the db manager and the virtual layer appeared ![Screenshot (12)](https://user-images.githubusercontent.com/43775440/152252298-d23a3c55-3dae-43f5-924a-62efb69bed9e.png) ![Screenshot (13)](https://user-images.githubusercontent.com/43775440/152252316-d6bfe6e8-214a-432c-9d49-7d2372750d96.png) relaunching ![Screenshot (14)](https://user-images.githubusercontent.com/43775440/152252400-d1f2bd64-7809-4380-a3ad-e8246cb454c1.png) the geoprocessing tool the layer appears duplicated ### Versions QGIS version 3.16.8-Hannover QGIS code revision 8c50902ea4 Compiled against Qt 5.11.2 Running against Qt 5.11.2 Compiled against GDAL/OGR 3.1.4 Running against GDAL/OGR 3.1.4 Compiled against GEOS 3.8.1-CAPI-1.13.3 Running against
GEOS 3.8.1-CAPI-1.13.3 Compiled against SQLite 3.29.0 Running against SQLite 3.29.0 PostgreSQL Client Version 11.5 SpatiaLite Version 4.3.0 QWT Version 6.1.3 QScintilla2 Version 2.10.8 Compiled against PROJ 6.3.2 Running against PROJ Rel. 6.3.2, May 1st, 2020 OS Version Windows 10 (10.0) Active python plugins DataPlotly; DEMto3D; GroupStats; mmqgis; Multi_Ring_Buffer; pointsamplingtool; qfieldsync; quick_map_services; shapetools; SwapVectorDirection; TerrainShading; db_manager; MetaSearch; processing ### Supported QGIS version - [ ] I'm running a supported QGIS version according to the roadmap. ### New profile - [ ] I tried with a new QGIS profile ### Additional context _No response_
1.0
Processing: confusion when listing inputs from "virtual layers" - ### What is the bug or the crash? Running some processing algorithms within a project with numerous layers contained in a geopackage, I noticed that the same layers appeared many times in the form input layer. After going through some steps I remembered having looked at the preview of the layers inside the db manager. From there I did some tests and I saw that once the preview of the table was created, a virtual layer was automatically created within the project layers, inside db manager. By saving the project these layers are saved, generating a redundancy of layers that makes it difficult to manage during processing operations. Furthemore is not possible to delete these virtual layers. ### Steps to reproduce the issue I gave it a try by creating a new project from scratch, so i loaded the layers connected the geopackage in the db manager and checked the virtual layers in the project layers ![Screenshot (11)](https://user-images.githubusercontent.com/43775440/152252206-49e30a86-258e-48ea-9420-0bd8a136626b.png) I tried to launch a geoprocessing tool and only the layers present in the panel appeared ![Screenshot (10)](https://user-images.githubusercontent.com/43775440/152251842-66a28d26-6927-4d19-a0aa-2e435e7c89c4.png) I activated a preview of the table in the db manager and the virtual layer appeared ![Screenshot (12)](https://user-images.githubusercontent.com/43775440/152252298-d23a3c55-3dae-43f5-924a-62efb69bed9e.png) ![Screenshot (13)](https://user-images.githubusercontent.com/43775440/152252316-d6bfe6e8-214a-432c-9d49-7d2372750d96.png) relaunching ![Screenshot (14)](https://user-images.githubusercontent.com/43775440/152252400-d1f2bd64-7809-4380-a3ad-e8246cb454c1.png) the geoprocessing tool the layer appears duplicated ### Versions QGIS version 3.16.8-Hannover QGIS code revision 8c50902ea4 Compiled against Qt 5.11.2 Running against Qt 5.11.2 Compiled against GDAL/OGR 3.1.4 Running against GDAL/OGR 
3.1.4 Compiled against GEOS 3.8.1-CAPI-1.13.3 Running against GEOS 3.8.1-CAPI-1.13.3 Compiled against SQLite 3.29.0 Running against SQLite 3.29.0 PostgreSQL Client Version 11.5 SpatiaLite Version 4.3.0 QWT Version 6.1.3 QScintilla2 Version 2.10.8 Compiled against PROJ 6.3.2 Running against PROJ Rel. 6.3.2, May 1st, 2020 OS Version Windows 10 (10.0) Active python plugins DataPlotly; DEMto3D; GroupStats; mmqgis; Multi_Ring_Buffer; pointsamplingtool; qfieldsync; quick_map_services; shapetools; SwapVectorDirection; TerrainShading; db_manager; MetaSearch; processing ### Supported QGIS version - [ ] I'm running a supported QGIS version according to the roadmap. ### New profile - [ ] I tried with a new QGIS profile ### Additional context _No response_
process
processing confusion when listing inputs from virtual layers what is the bug or the crash running some processing algorithms within a project with numerous layers contained in a geopackage i noticed that the same layers appeared many times in the form input layer after going through some steps i remembered having looked at the preview of the layers inside the db manager from there i did some tests and i saw that once the preview of the table was created a virtual layer was automatically created within the project layers inside db manager by saving the project these layers are saved generating a redundancy of layers that makes it difficult to manage during processing operations furthemore is not possible to delete these virtual layers steps to reproduce the issue i gave it a try by creating a new project from scratch so i loaded the layers connected the geopackage in the db manager and checked the virtual layers in the project layers i tried to launch a geoprocessing tool and only the layers present in the panel appeared i activated a preview of the table in the db manager and the virtual layer appeared relaunching the geoprocessing tool the layer appears duplicated versions qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version windows active python plugins dataplotly groupstats mmqgis multi ring buffer pointsamplingtool qfieldsync quick map services shapetools swapvectordirection terrainshading db manager metasearch processing supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
1
22,392
31,142,286,956
IssuesEvent
2023-08-16 01:44:20
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Flaky test: Timed out retrying after 4000ms: Expected not to find content: 'Your tests are loading...' but continuously found it.
OS: windows process: flaky test topic: flake ❄️ stage: flake stale
### Link to dashboard or CircleCI failure https://app.circleci.com/pipelines/github/cypress-io/cypress/41754/workflows/89edd596-de17-4dc1-8183-f4d037217241/jobs/1730646/tests ### Link to failing test in GitHub https://github.com/cypress-io/cypress/blob/develop/packages/app/cypress/e2e/cypress-in-cypress-run-mode.cy.ts#L44 ### Analysis <img width="1042" alt="Screen Shot 2022-08-11 at 6 59 33 PM" src="https://user-images.githubusercontent.com/26726429/184271151-b8402dd2-24ba-4e01-b8b3-375238291850.png"> ### Cypress Version 10.4.0 ### Other _No response_
1.0
Flaky test: Timed out retrying after 4000ms: Expected not to find content: 'Your tests are loading...' but continuously found it. - ### Link to dashboard or CircleCI failure https://app.circleci.com/pipelines/github/cypress-io/cypress/41754/workflows/89edd596-de17-4dc1-8183-f4d037217241/jobs/1730646/tests ### Link to failing test in GitHub https://github.com/cypress-io/cypress/blob/develop/packages/app/cypress/e2e/cypress-in-cypress-run-mode.cy.ts#L44 ### Analysis <img width="1042" alt="Screen Shot 2022-08-11 at 6 59 33 PM" src="https://user-images.githubusercontent.com/26726429/184271151-b8402dd2-24ba-4e01-b8b3-375238291850.png"> ### Cypress Version 10.4.0 ### Other _No response_
process
flaky test timed out retrying after expected not to find content your tests are loading but continuously found it link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at pm src cypress version other no response
1
26,666
4,238,295,073
IssuesEvent
2016-07-06 02:34:27
coreos/etcd
https://api.github.com/repos/coreos/etcd
closed
test: curl tls tests fail on fedora, others
area/testing kind/bug
The test certs (generated by cfssl) use ECDSA which (intentionally) isn't supported on some systems.
1.0
test: curl tls tests fail on fedora, others - The test certs (generated by cfssl) use ECDSA which (intentionally) isn't supported on some systems.
non_process
test curl tls tests fail on fedora others the test certs generated by cfssl use ecdsa which intentionally isn t supported on some systems
0
22,513
31,563,901,427
IssuesEvent
2023-09-03 15:27:53
rladstaetter/LogoRRR
https://api.github.com/repos/rladstaetter/LogoRRR
opened
LogoRRR 23.3.0 - Release
release process
### Release of LogoRRR 23.2.1 - [ ] perform release test (see ReleaseTest.md) - [ ] cleanup Milestone plan - [ ] update images via `ScreenShotterApp` - [ ] create dedicated Screenshot for github front page (use intellij log) - [ ] search for version number of last released version and update it to current version (-> 23.1.0) - [ ] Update Release Notes - [ ] Build binary artefacts MacOsX (scripts available in project root) - [ ] Build binary artefacts for Linux - [ ] Build binary artefacts Windows - [ ] Check readmes and links - [ ] Update Homepage - [ ] Release via Github, update links - [ ] close all issues - [ ] tweet - [ ] update version to next version
1.0
LogoRRR 23.3.0 - Release - ### Release of LogoRRR 23.2.1 - [ ] perform release test (see ReleaseTest.md) - [ ] cleanup Milestone plan - [ ] update images via `ScreenShotterApp` - [ ] create dedicated Screenshot for github front page (use intellij log) - [ ] search for version number of last released version and update it to current version (-> 23.1.0) - [ ] Update Release Notes - [ ] Build binary artefacts MacOsX (scripts available in project root) - [ ] Build binary artefacts for Linux - [ ] Build binary artefacts Windows - [ ] Check readmes and links - [ ] Update Homepage - [ ] Release via Github, update links - [ ] close all issues - [ ] tweet - [ ] update version to next version
process
logorrr release release of logorrr perform release test see releasetest md cleanup milestone plan update images via screenshotterapp create dedicated screenshot for github front page use intellij log search for version number of last released version and update it to current version update release notes build binary artefacts macosx scripts available in project root build binary artefacts for linux build binary artefacts windows check readmes and links update homepage release via github update links close all issues tweet update version to next version
1
396,916
27,141,378,434
IssuesEvent
2023-02-16 16:34:37
falcosecurity/.github
https://api.github.com/repos/falcosecurity/.github
opened
[NOMINATION] Falco contributor of the month - January
kind/documentation
**What to document** <!-- Please comment on the issue your nominations for the month of January. Please include name + link to their contribution. -->
1.0
[NOMINATION] Falco contributor of the month - January - **What to document** <!-- Please comment on the issue your nominations for the month of January. Please include name + link to their contribution. -->
non_process
falco contributor of the month january what to document please comment on the issue your nominations for the month of january please include name link to their contribution
0
183,482
31,496,233,249
IssuesEvent
2023-08-31 02:44:22
fireside68/rolling-bites
https://api.github.com/repos/fireside68/rolling-bites
opened
Design Map/Filtered Directory Tab
design
Build up the design for the map and truck list. The map should take up 60% of the available container; the list the other 40%. Be explicit about the way each truck's information is displayed. Name, icon, what other information can be displayed, etc. Pay special attention to mobile design. Where will the filters go? Checkboxes, radio buttons? Build a wireframe. Ensure the navbar and header are included, _especially_ mobile. Any committed work should be done in a `design` feature branch.
1.0
Design Map/Filtered Directory Tab - Build up the design for the map and truck list. The map should take up 60% of the available container; the list the other 40%. Be explicit about the way each truck's information is displayed. Name, icon, what other information can be displayed, etc. Pay special attention to mobile design. Where will the filters go? Checkboxes, radio buttons? Build a wireframe. Ensure the navbar and header are included, _especially_ mobile. Any committed work should be done in a `design` feature branch.
non_process
design map filtered directory tab build up the design for the map and truck list the map should take up of the available container the list the other be explicit about the way each truck s information is displayed name icon what other information can be displayed etc pay special attention to mobile design where will the filters go checkboxes radio buttons build a wireframe ensure the navbar and header are included especially mobile any committed work should be done in a design feature branch
0
961
3,419,812,084
IssuesEvent
2015-12-08 11:44:32
symfony/symfony
https://api.github.com/repos/symfony/symfony
closed
Symfony\Component\Process\Process Bug in Windows Server
Process Unconfirmed
I'm using the wkhtmltopdf.exe command line program to generate PDF from HTML I'm using it through *Knp/Snappy* package which is using ```Symfony\Component\Process\Process``` to run the program in the cmd. It works fine in my computer (Win 8.1 64bit and Win 7 32bit) but not in the deployment machine (Windows Server 2012 R2, 12GB, Intel Xeon 2.4 Ghz) here is the code using Process class: ``` $binary = \Config::get('snappy.pdf.binary'); $process = new Symfony\Component\Process\Process($binary.' http://localhost/gca/public/demandes-achat/1138/pdf processs.pdf'); $process->run(function ($type, $buffer) { if (\Symfony\Component\Process\Process::ERR === $type) { echo 'ERR > '.$buffer; } else { echo 'OUT > '.$buffer; } }); ``` I get a 60 seconds timeout error: ``` Symfony\Component\Process\Exception\ProcessTimedOutExceptionGET /gca/public/test The process "C:\xampp\htdocs\gca/wkhtmltopdf/bin/wkhtmltopdf.exe http://localhost/gca/public/demandes-achat/1138/pdf process.pdf" exceeded the timeout of 60 seconds ``` Knowing that wkhtmltopdf is working as expected using the windows cmd, I tried to use the the native PHP function ```exec()``` and it worked fine ! From this, I noticed that there is a problem with the Process class. I opened an issue in the *Knp/Snappy* reposetory, it may help: https://github.com/KnpLabs/snappy/issues/169 Thank you
1.0
Symfony\Component\Process\Process Bug in Windows Server - I'm using the wkhtmltopdf.exe command line program to generate PDF from HTML I'm using it through *Knp/Snappy* package which is using ```Symfony\Component\Process\Process``` to run the program in the cmd. It works fine in my computer (Win 8.1 64bit and Win 7 32bit) but not in the deployment machine (Windows Server 2012 R2, 12GB, Intel Xeon 2.4 Ghz) here is the code using Process class: ``` $binary = \Config::get('snappy.pdf.binary'); $process = new Symfony\Component\Process\Process($binary.' http://localhost/gca/public/demandes-achat/1138/pdf processs.pdf'); $process->run(function ($type, $buffer) { if (\Symfony\Component\Process\Process::ERR === $type) { echo 'ERR > '.$buffer; } else { echo 'OUT > '.$buffer; } }); ``` I get a 60 seconds timeout error: ``` Symfony\Component\Process\Exception\ProcessTimedOutExceptionGET /gca/public/test The process "C:\xampp\htdocs\gca/wkhtmltopdf/bin/wkhtmltopdf.exe http://localhost/gca/public/demandes-achat/1138/pdf process.pdf" exceeded the timeout of 60 seconds ``` Knowing that wkhtmltopdf is working as expected using the windows cmd, I tried to use the the native PHP function ```exec()``` and it worked fine ! From this, I noticed that there is a problem with the Process class. I opened an issue in the *Knp/Snappy* reposetory, it may help: https://github.com/KnpLabs/snappy/issues/169 Thank you
process
symfony component process process bug in windows server i m using the wkhtmltopdf exe command line program to generate pdf from html i m using it through knp snappy package which is using symfony component process process to run the program in the cmd it works fine in my computer win and win but not in the deployment machine windows server intel xeon ghz here is the code using process class binary config get snappy pdf binary process new symfony component process process binary processs pdf process run function type buffer if symfony component process process err type echo err buffer else echo out buffer i get a seconds timeout error symfony component process exception processtimedoutexceptionget gca public test the process c xampp htdocs gca wkhtmltopdf bin wkhtmltopdf exe process pdf exceeded the timeout of seconds knowing that wkhtmltopdf is working as expected using the windows cmd i tried to use the the native php function exec and it worked fine from this i noticed that there is a problem with the process class i opened an issue in the knp snappy reposetory it may help thank you
1
11,533
14,406,094,357
IssuesEvent
2020-12-03 19:41:34
googleapis/java-dns
https://api.github.com/repos/googleapis/java-dns
closed
Promote to Beta
api: dns type: process
Package name: **google-cloud-dns** Current release: **alpha** Proposed release: **beta** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [x] Server API is beta or GA - [x] Service API is public - [x] Client surface is mostly stable (no known issues that could significantly change the surface) - [ ] All manual types and methods have comment documentation - [ ] Package name is idiomatic for the platform - [ ] At least one integration/smoke test is defined and passing - [ ] Central GitHub README lists and points to the per-API README - [ ] Per-API README links to product page on cloud.google.com - [ ] Manual code has been reviewed for API stability by repo owner ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [ ] Per-API README includes a full description of the API - [ ] Per-API README contains at least one “getting started” sample using the most common API scenario - [ ] Manual code has been reviewed by API producer - [ ] Manual code has been reviewed by a DPE responsible for samples - [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
1.0
Promote to Beta - Package name: **google-cloud-dns** Current release: **alpha** Proposed release: **beta** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [x] Server API is beta or GA - [x] Service API is public - [x] Client surface is mostly stable (no known issues that could significantly change the surface) - [ ] All manual types and methods have comment documentation - [ ] Package name is idiomatic for the platform - [ ] At least one integration/smoke test is defined and passing - [ ] Central GitHub README lists and points to the per-API README - [ ] Per-API README links to product page on cloud.google.com - [ ] Manual code has been reviewed for API stability by repo owner ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [ ] Per-API README includes a full description of the API - [ ] Per-API README contains at least one “getting started” sample using the most common API scenario - [ ] Manual code has been reviewed by API producer - [ ] Manual code has been reviewed by a DPE responsible for samples - [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
process
promote to beta package name google cloud dns current release alpha proposed release beta instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required server api is beta or ga service api is public client surface is mostly stable no known issues that could significantly change the surface all manual types and methods have comment documentation package name is idiomatic for the platform at least one integration smoke test is defined and passing central github readme lists and points to the per api readme per api readme links to product page on cloud google com manual code has been reviewed for api stability by repo owner optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
1
44,920
23,825,348,837
IssuesEvent
2022-09-05 14:27:01
kumahq/kuma
https://api.github.com/repos/kumahq/kuma
closed
EDS returning envoy IP not service IP for universal service
triage/pending kind/improvement area/performance
### Summary EDS is returning the envoy address instead of the service address. This causes traffic, including health checks, to go back through envoy instead of directly through the service. ### Steps To Reproduce Below, envoy should have `127.0.0.1:5000` not `172.31.29.90:8080` for the `echo-service_default_svc_8080` cluster ``` kuma-dp run \ --name=internalhttpbin-52i8jq \ --mesh=default \ --cp-address=https://ip-172-31-20-18.us-east-2.compute.internal:5678/ \ --dataplane="type: Dataplane mesh: default name: internalhttpbin-52i8jq networking: address: ip-172-31-29-90.us-east-2.compute.internal outbound: - port: 10000 tags: kuma.io/service: externalhttpbin - port: 10001 tags: kuma.io/service: postman-echo - port: 10002 tags: kuma.io/service: internalhttpbin - port: 10003 tags: kuma.io/service: echo-service_default_svc_8080 inbound: - port: 8080 servicePort: 5000 serviceAddress: 127.0.0.1 tags: kuma.io/service: echo-service_default_svc_8080 kuma.io/zone: universal kuma.io/protocol: http" \ --dataplane-token-file=kuma-token-internalhttpbin-52i8jq ``` ``` echo-service_default_svc_8080::default_priority::max_connections::1024 echo-service_default_svc_8080::default_priority::max_pending_requests::1024 echo-service_default_svc_8080::default_priority::max_requests::1024 echo-service_default_svc_8080::default_priority::max_retries::3 echo-service_default_svc_8080::high_priority::max_connections::1024 echo-service_default_svc_8080::high_priority::max_pending_requests::1024 echo-service_default_svc_8080::high_priority::max_requests::1024 echo-service_default_svc_8080::high_priority::max_retries::3 echo-service_default_svc_8080::added_via_api::true echo-service_default_svc_8080::172.31.29.90:8080::cx_active::0 echo-service_default_svc_8080::172.31.29.90:8080::cx_connect_fail::0 echo-service_default_svc_8080::172.31.29.90:8080::cx_total::0 echo-service_default_svc_8080::172.31.29.90:8080::rq_active::0 echo-service_default_svc_8080::172.31.29.90:8080::rq_error::0 
echo-service_default_svc_8080::172.31.29.90:8080::rq_success::0 echo-service_default_svc_8080::172.31.29.90:8080::rq_timeout::0 echo-service_default_svc_8080::172.31.29.90:8080::rq_total::0 echo-service_default_svc_8080::172.31.29.90:8080::hostname:: echo-service_default_svc_8080::172.31.29.90:8080::health_flags::/failed_active_hc echo-service_default_svc_8080::172.31.29.90:8080::weight::2 echo-service_default_svc_8080::172.31.29.90:8080::region:: echo-service_default_svc_8080::172.31.29.90:8080::zone::universal echo-service_default_svc_8080::172.31.29.90:8080::sub_zone:: echo-service_default_svc_8080::172.31.29.90:8080::canary::false echo-service_default_svc_8080::172.31.29.90:8080::priority::0 echo-service_default_svc_8080::172.31.29.90:8080::success_rate::-1.0 echo-service_default_svc_8080::172.31.29.90:8080::local_origin_success_rate::-1.0 ``` ### Additional Details & Logs
True
EDS returning envoy IP not service IP for universal service - ### Summary EDS is returning the envoy address instead of the service address. This causes traffic, including health checks, to go back through envoy instead of directly through the service. ### Steps To Reproduce Below, envoy should have `127.0.0.1:5000` not `172.31.29.90:8080` for the `echo-service_default_svc_8080` cluster ``` kuma-dp run \ --name=internalhttpbin-52i8jq \ --mesh=default \ --cp-address=https://ip-172-31-20-18.us-east-2.compute.internal:5678/ \ --dataplane="type: Dataplane mesh: default name: internalhttpbin-52i8jq networking: address: ip-172-31-29-90.us-east-2.compute.internal outbound: - port: 10000 tags: kuma.io/service: externalhttpbin - port: 10001 tags: kuma.io/service: postman-echo - port: 10002 tags: kuma.io/service: internalhttpbin - port: 10003 tags: kuma.io/service: echo-service_default_svc_8080 inbound: - port: 8080 servicePort: 5000 serviceAddress: 127.0.0.1 tags: kuma.io/service: echo-service_default_svc_8080 kuma.io/zone: universal kuma.io/protocol: http" \ --dataplane-token-file=kuma-token-internalhttpbin-52i8jq ``` ``` echo-service_default_svc_8080::default_priority::max_connections::1024 echo-service_default_svc_8080::default_priority::max_pending_requests::1024 echo-service_default_svc_8080::default_priority::max_requests::1024 echo-service_default_svc_8080::default_priority::max_retries::3 echo-service_default_svc_8080::high_priority::max_connections::1024 echo-service_default_svc_8080::high_priority::max_pending_requests::1024 echo-service_default_svc_8080::high_priority::max_requests::1024 echo-service_default_svc_8080::high_priority::max_retries::3 echo-service_default_svc_8080::added_via_api::true echo-service_default_svc_8080::172.31.29.90:8080::cx_active::0 echo-service_default_svc_8080::172.31.29.90:8080::cx_connect_fail::0 echo-service_default_svc_8080::172.31.29.90:8080::cx_total::0 echo-service_default_svc_8080::172.31.29.90:8080::rq_active::0 
echo-service_default_svc_8080::172.31.29.90:8080::rq_error::0 echo-service_default_svc_8080::172.31.29.90:8080::rq_success::0 echo-service_default_svc_8080::172.31.29.90:8080::rq_timeout::0 echo-service_default_svc_8080::172.31.29.90:8080::rq_total::0 echo-service_default_svc_8080::172.31.29.90:8080::hostname:: echo-service_default_svc_8080::172.31.29.90:8080::health_flags::/failed_active_hc echo-service_default_svc_8080::172.31.29.90:8080::weight::2 echo-service_default_svc_8080::172.31.29.90:8080::region:: echo-service_default_svc_8080::172.31.29.90:8080::zone::universal echo-service_default_svc_8080::172.31.29.90:8080::sub_zone:: echo-service_default_svc_8080::172.31.29.90:8080::canary::false echo-service_default_svc_8080::172.31.29.90:8080::priority::0 echo-service_default_svc_8080::172.31.29.90:8080::success_rate::-1.0 echo-service_default_svc_8080::172.31.29.90:8080::local_origin_success_rate::-1.0 ``` ### Additional Details & Logs
non_process
eds returning envoy ip not service ip for universal service summary eds is returning the envoy address instead of the service address this causes traffic including health checks to go back through envoy instead of directly through the service steps to reproduce below envoy should have not for the echo service default svc cluster kuma dp run name internalhttpbin mesh default cp address dataplane type dataplane mesh default name internalhttpbin networking address ip us east compute internal outbound port tags kuma io service externalhttpbin port tags kuma io service postman echo port tags kuma io service internalhttpbin port tags kuma io service echo service default svc inbound port serviceport serviceaddress tags kuma io service echo service default svc kuma io zone universal kuma io protocol http dataplane token file kuma token internalhttpbin echo service default svc default priority max connections echo service default svc default priority max pending requests echo service default svc default priority max requests echo service default svc default priority max retries echo service default svc high priority max connections echo service default svc high priority max pending requests echo service default svc high priority max requests echo service default svc high priority max retries echo service default svc added via api true echo service default svc cx active echo service default svc cx connect fail echo service default svc cx total echo service default svc rq active echo service default svc rq error echo service default svc rq success echo service default svc rq timeout echo service default svc rq total echo service default svc hostname echo service default svc health flags failed active hc echo service default svc weight echo service default svc region echo service default svc zone universal echo service default svc sub zone echo service default svc canary false echo service default svc priority echo service default svc success rate echo service default svc 
local origin success rate additional details logs
0
7,785
10,925,958,305
IssuesEvent
2019-11-22 13:45:07
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Problem with Geo Location (Unknown)
dependencies log-processing
I am running GoAccess to gather some stats on several reverse proxies handled by Caddy. To launch GoAccess I have customized a startup script: ``` #!/bin/sh ### Get GeoIP db cd /share/Public/goaccess/GeoIP wget -N http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz gunzip -f GeoLite2-City.mmdb.gz ### Launch goaccess goaccess --no-global-config -o /share/Web/goaccess/index.html --real-time-html --config-file=/share/Public/goaccess/goaccess.conf --geoip-database /share/Public/goaccess/GeoIP/GeoLite2-City.mmdb ``` goaccess.conf is the following: ``` port 7890 daemonize true log-format COMMON real-time-html true output /share/Web/goaccess/index.html log-file /share/Public/caddy/accesslogs/goaccess.log agent-list true no-query-string true ``` If I go on goaccess.log, I have the following type of entries: ``` 146.0.177.5 - goaccess [04/Nov/2019:10:44:04 +0100] "GET / HTTP/2.0" 200 116439 192.168.1.1 - [03/Nov/2019:19:27:10 +0100] "GET / HTTP/2.0" 401 17 212.66.75.12 - goaccess [04/Nov/2019:22:59:46 +0100] "GET /favicon.ico HTTP/2.0" 404 236 ``` Does anyone understand why I only get "Unknown" in the Geo Location tab? Thanks
1.0
Problem with Geo Location (Unknown) - I am running GoAccess to gather some stats on several reverse proxies handled by Caddy. To launch GoAccess I have customized a startup script: ``` #!/bin/sh ### Get GeoIP db cd /share/Public/goaccess/GeoIP wget -N http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz gunzip -f GeoLite2-City.mmdb.gz ### Launch goaccess goaccess --no-global-config -o /share/Web/goaccess/index.html --real-time-html --config-file=/share/Public/goaccess/goaccess.conf --geoip-database /share/Public/goaccess/GeoIP/GeoLite2-City.mmdb ``` goaccess.conf is the following: ``` port 7890 daemonize true log-format COMMON real-time-html true output /share/Web/goaccess/index.html log-file /share/Public/caddy/accesslogs/goaccess.log agent-list true no-query-string true ``` If I go on goaccess.log, I have the following type of entries: ``` 146.0.177.5 - goaccess [04/Nov/2019:10:44:04 +0100] "GET / HTTP/2.0" 200 116439 192.168.1.1 - [03/Nov/2019:19:27:10 +0100] "GET / HTTP/2.0" 401 17 212.66.75.12 - goaccess [04/Nov/2019:22:59:46 +0100] "GET /favicon.ico HTTP/2.0" 404 236 ``` Does anyone understand why I only get "Unknown" in the Geo Location tab? Thanks
process
problem with geo location unknown i am running goaccess to gather some stats on several reverse proxies handled by caddy to launch goaccess i have customized a startup script bin sh get geoip db cd share public goaccess geoip wget n gunzip f city mmdb gz launch goaccess goaccess no global config o share web goaccess index html real time html config file share public goaccess goaccess conf geoip database share public goaccess geoip city mmdb goaccess conf is the following port daemonize true log format common real time html true output share web goaccess index html log file share public caddy accesslogs goaccess log agent list true no query string true if i go on goaccess log i have the following type of entries goaccess get http get http goaccess get favicon ico http does anyone understand why i only get unknown in the geo location tab thanks
1
28,213
6,967,635,582
IssuesEvent
2017-12-10 11:48:52
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
[4.0] Alert message changes
No Code Attached Yet
the code for the alert messages etc has changed in J4 - as a result all the old style code will include the untranslated status eg "info/error/warning" see screenshot. Ideally the markup needs to be changed so that the untranslated text is NOT included in the message without all devs needing to update their markup for J4 <img width="893" alt="screenshotr13-53-27" src="https://cloud.githubusercontent.com/assets/1296369/25555613/3fc742d6-2ce3-11e7-8971-9f4826b5839d.png">
1.0
[4.0] Alert message changes - the code for the alert messages etc has changed in J4 - as a result all the old style code will include the untranslated status eg "info/error/warning" see screenshot. Ideally the markup needs to be changed so that the untranslated text is NOT included in the message without all devs needing to update their markup for J4 <img width="893" alt="screenshotr13-53-27" src="https://cloud.githubusercontent.com/assets/1296369/25555613/3fc742d6-2ce3-11e7-8971-9f4826b5839d.png">
non_process
alert message changes the code for the alert messages etc has changed in as a result all the old style code will include the untranslated status eg info error warning see screenshot ideally the markup needs to be changed so that the untranslated text is not included in the message without all devs needing to update their markup for img width alt src
0
322,403
9,817,262,069
IssuesEvent
2019-06-13 16:18:25
scrapinghub/arche
https://api.github.com/repos/scrapinghub/arche
opened
Collections item key doesn't start with 0
Priority: High Type: Bug
start=0 is set as default for Cloud Items puller, however collections have custom indexes, 0 not necessary the first. Maybe the jobs too.
1.0
Collections item key doesn't start with 0 - start=0 is set as default for Cloud Items puller, however collections have custom indexes, 0 not necessary the first. Maybe the jobs too.
non_process
collections item key doesn t start with start is set as default for cloud items puller however collections have custom indexes not necessary the first maybe the jobs too
0
753,720
26,359,441,060
IssuesEvent
2023-01-11 12:17:22
42Atomys/stud42
https://api.github.com/repos/42Atomys/stud42
opened
fix: twemoji cdn is not resolved anymore
help wanted good first issue (╯°□°)╯︵ ┻━┻ 🤯 priority/medium 🟨 aspect/interface 🕹 type/bug 🔥
### Describe the bug The Twemoji CDN (MaxCDN) dont exist anymore. Need to fallback to another CDN ### To Reproduce 1. Go to any link of twemoji or go to the cluster page of app ### Expected behavior _No response_ ### Relevant log output _No response_ ### Version of software v0.21 ### Environment Live (https://s42.app) ### Additional context Tracking issue on twemoji : https://github.com/twitter/twemoji/issues/580 ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
1.0
fix: twemoji cdn is not resolved anymore - ### Describe the bug The Twemoji CDN (MaxCDN) dont exist anymore. Need to fallback to another CDN ### To Reproduce 1. Go to any link of twemoji or go to the cluster page of app ### Expected behavior _No response_ ### Relevant log output _No response_ ### Version of software v0.21 ### Environment Live (https://s42.app) ### Additional context Tracking issue on twemoji : https://github.com/twitter/twemoji/issues/580 ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
non_process
fix twemoji cdn is not resolved anymore describe the bug the twemoji cdn maxcdn dont exist anymore need to fallback to another cdn to reproduce go to any link of twemoji or go to the cluster page of app expected behavior no response relevant log output no response version of software environment live additional context tracking issue on twemoji code of conduct i agree to follow this project s code of conduct
0
39,722
12,698,867,416
IssuesEvent
2020-06-22 14:03:52
mahonec/WebGoat-Legacy
https://api.github.com/repos/mahonec/WebGoat-Legacy
opened
CVE-2014-0114 (High) detected in commons-beanutils-1.6.jar
security vulnerability
## CVE-2014-0114 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.6.jar</b></p></summary> <p>Java Bean Utililities</p> <p>Library home page: <a href="http://jakarta.apache.org/commons/beanutils/">http://jakarta.apache.org/commons/beanutils/</a></p> <p>Path to vulnerable library: /WebGoat-Legacy/target/WebGoat-6.0.1/WEB-INF/lib/commons-beanutils-1.6.jar,/home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils/1.6/commons-beanutils-1.6.jar</p> <p> Dependency Hierarchy: - :x: **commons-beanutils-1.6.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/mahonec/WebGoat-Legacy/commit/9b9155ac6645ae2fcb5f2195a346a9a39d3137e7">9b9155ac6645ae2fcb5f2195a346a9a39d3137e7</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1. 
<p>Publish Date: 2014-04-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114>CVE-2014-0114</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114</a></p> <p>Release Date: 2014-04-30</p> <p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-beanutils","packageName":"commons-beanutils","packageVersion":"1.6","isTransitiveDependency":false,"dependencyTree":"commons-beanutils:commons-beanutils:1.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-beanutils:commons-beanutils:1.9.4"}],"vulnerabilityIdentifier":"CVE-2014-0114","vulnerabilityDetails":"Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to \"manipulate\" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114","cvss2Severity":"high","cvss2Score":"7.5","extraData":{}}</REMEDIATE> -->
True
CVE-2014-0114 (High) detected in commons-beanutils-1.6.jar - ## CVE-2014-0114 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.6.jar</b></p></summary> <p>Java Bean Utililities</p> <p>Library home page: <a href="http://jakarta.apache.org/commons/beanutils/">http://jakarta.apache.org/commons/beanutils/</a></p> <p>Path to vulnerable library: /WebGoat-Legacy/target/WebGoat-6.0.1/WEB-INF/lib/commons-beanutils-1.6.jar,/home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils/1.6/commons-beanutils-1.6.jar</p> <p> Dependency Hierarchy: - :x: **commons-beanutils-1.6.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/mahonec/WebGoat-Legacy/commit/9b9155ac6645ae2fcb5f2195a346a9a39d3137e7">9b9155ac6645ae2fcb5f2195a346a9a39d3137e7</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1. 
<p>Publish Date: 2014-04-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114>CVE-2014-0114</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114</a></p> <p>Release Date: 2014-04-30</p> <p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-beanutils","packageName":"commons-beanutils","packageVersion":"1.6","isTransitiveDependency":false,"dependencyTree":"commons-beanutils:commons-beanutils:1.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-beanutils:commons-beanutils:1.9.4"}],"vulnerabilityIdentifier":"CVE-2014-0114","vulnerabilityDetails":"Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to \"manipulate\" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114","cvss2Severity":"high","cvss2Score":"7.5","extraData":{}}</REMEDIATE> -->
non_process
cve high detected in commons beanutils jar cve high severity vulnerability vulnerable library commons beanutils jar java bean utililities library home page a href path to vulnerable library webgoat legacy target webgoat web inf lib commons beanutils jar home wss scanner repository commons beanutils commons beanutils commons beanutils jar dependency hierarchy x commons beanutils jar vulnerable library found in head commit a href vulnerability details apache commons beanutils as distributed in lib commons beanutils jar in apache struts x through and in other products requiring commons beanutils through does not suppress the class property which allows remote attackers to manipulate the classloader and execute arbitrary code via the class parameter as demonstrated by the passing of this parameter to the getclass method of the actionform object in struts publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution commons beanutils commons beanutils rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails apache commons beanutils as distributed in lib commons beanutils jar in apache struts x through and in other products requiring commons beanutils through does not suppress the class property which allows remote attackers to manipulate the classloader and execute arbitrary code via the class parameter as demonstrated by the passing of this parameter to the getclass method of the actionform object in struts vulnerabilityurl
0
413,829
12,092,775,431
IssuesEvent
2020-04-19 16:56:18
osmontrouge/caresteouvert
https://api.github.com/repos/osmontrouge/caresteouvert
closed
Usability: Make typed in opening hours editable OR add button to confirm time
User Experience priority: medium
**Is your feature request related to a problem? Please describe.** * I came across a note that corrected it's own provided opening_hours:covid19 in the comment. * This is due to the following usability issue: When a minute is selected, the time selection closes right away (or goes from the opening to the closing time), but sometimes you just didn't pick the right time (especially minute). Right now you have to discard the current day(s) and start again. **Describe the solution you'd like** 1. Make the time selection appear again when clicking on the time. 2. Add a confirmation step after each time selection to allow correcting the hour/minute. (Ideally: "Add another opening time for the same day(s)." vs. "Finish day") **Describe alternatives you've considered** See above. **Additional context** Can't find the corresponding note, but happened to me several times and was quite frustrating.
1.0
Usability: Make typed in opening hours editable OR add button to confirm time - **Is your feature request related to a problem? Please describe.** * I came across a note that corrected it's own provided opening_hours:covid19 in the comment. * This is due to the following usability issue: When a minute is selected, the time selection closes right away (or goes from the opening to the closing time), but sometimes you just didn't pick the right time (especially minute). Right now you have to discard the current day(s) and start again. **Describe the solution you'd like** 1. Make the time selection appear again when clicking on the time. 2. Add a confirmation step after each time selection to allow correcting the hour/minute. (Ideally: "Add another opening time for the same day(s)." vs. "Finish day") **Describe alternatives you've considered** See above. **Additional context** Can't find the corresponding note, but happened to me several times and was quite frustrating.
non_process
usability make typed in opening hours editable or add button to confirm time is your feature request related to a problem please describe i came across a note that corrected it s own provided opening hours in the comment this is due to the following usability issue when a minute is selected the time selection closes right away or goes from the opening to the closing time but sometimes you just didn t pick the right time especially minute right now you have to discard the current day s and start again describe the solution you d like make the time selection appear again when clicking on the time add a confirmation step after each time selection to allow correcting the hour minute ideally add another opening time for the same day s vs finish day describe alternatives you ve considered see above additional context can t find the corresponding note but happened to me several times and was quite frustrating
0
16,431
21,314,497,915
IssuesEvent
2022-04-16 03:45:15
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
System.Diagnostics.Perf_Process.GetProcessById has regressed 50% on Windows
area-System.Diagnostics.Process os-windows untriaged
`System.Diagnostics.Perf_Process.GetProcessById` has regressed 50% on Windows [Reporting System](https://pvscmdupload.blob.core.windows.net/reports/allTestHistory%2frefs%2fheads%2fmain_x64_Windows%2010.0.18362%2fSystem.Diagnostics.Perf_Process.GetProcessById.html): ![image](https://user-images.githubusercontent.com/6011991/162775224-5a98262f-8613-4f2b-add8-4c7e1e07eacd.png) Looking at the [commit diff](https://github.com/dotnet/runtime/compare/18d9496d0f57def6ea803cbb9d3f085e7428f205...1b75f4264ebafaa72340ff0c972d43e5ab975ebb) it's most likely caused by #64723 (cc @epeshk) Repro: ```cmd git clone https://github.com/dotnet/performance.git py .\performance\scripts\benchmarks_monthly.py net7.0-preview2 net7.0-preview3 --filter System.Diagnostics.Perf_Process.GetProcessById ``` <details> ## System.Diagnostics.Perf_Process.GetProcessById | Result | Ratio | Operating System | Bit | | ------ | -----:| --------------------- | ----- | | Slower | 0.48 | Windows 10 | X64 | | Slower | 0.50 | Windows 11 | X64 | | Slower | 0.49 | Windows 11 | X64 | | Slower | 0.44 | Windows 11 | X64 | | Same | 1.02 | alpine 3.13 | X64 | | Same | 1.02 | centos 7 | X64 | | Same | 0.97 | debian 11 | X64 | | Same | 0.96 | pop 20.04 | X64 | | Same | 1.00 | ubuntu 18.04 | X64 | | Faster | 1.15 | ubuntu 18.04 | X64 | | Same | 0.96 | alpine 3.12 | Arm64 | | Faster | 1.18 | debian 11 | Arm64 | | Same | 1.01 | ubuntu 18.04 | Arm64 | | Slower | 0.47 | Windows 10 | Arm64 | | Slower | 0.42 | Windows 11 | Arm64 | | Slower | 0.43 | Windows 10 | X86 | | Slower | 0.48 | Windows 11 | X86 | | Slower | 0.49 | Windows 11 | X86 | | Slower | 0.34 | Windows 7 SP1 | X86 | | Same | 1.07 | ubuntu 18.04 | Arm | | Slower | 0.44 | Windows 10 | Arm | | Same | 1.02 | macOS Monterey 12.2.1 | X64 | | Same | 1.05 | macOS Monterey 12.3.1 | X64 | </details>
1.0
System.Diagnostics.Perf_Process.GetProcessById has regressed 50% on Windows - `System.Diagnostics.Perf_Process.GetProcessById` has regressed 50% on Windows [Reporting System](https://pvscmdupload.blob.core.windows.net/reports/allTestHistory%2frefs%2fheads%2fmain_x64_Windows%2010.0.18362%2fSystem.Diagnostics.Perf_Process.GetProcessById.html): ![image](https://user-images.githubusercontent.com/6011991/162775224-5a98262f-8613-4f2b-add8-4c7e1e07eacd.png) Looking at the [commit diff](https://github.com/dotnet/runtime/compare/18d9496d0f57def6ea803cbb9d3f085e7428f205...1b75f4264ebafaa72340ff0c972d43e5ab975ebb) it's most likely caused by #64723 (cc @epeshk) Repro: ```cmd git clone https://github.com/dotnet/performance.git py .\performance\scripts\benchmarks_monthly.py net7.0-preview2 net7.0-preview3 --filter System.Diagnostics.Perf_Process.GetProcessById ``` <details> ## System.Diagnostics.Perf_Process.GetProcessById | Result | Ratio | Operating System | Bit | | ------ | -----:| --------------------- | ----- | | Slower | 0.48 | Windows 10 | X64 | | Slower | 0.50 | Windows 11 | X64 | | Slower | 0.49 | Windows 11 | X64 | | Slower | 0.44 | Windows 11 | X64 | | Same | 1.02 | alpine 3.13 | X64 | | Same | 1.02 | centos 7 | X64 | | Same | 0.97 | debian 11 | X64 | | Same | 0.96 | pop 20.04 | X64 | | Same | 1.00 | ubuntu 18.04 | X64 | | Faster | 1.15 | ubuntu 18.04 | X64 | | Same | 0.96 | alpine 3.12 | Arm64 | | Faster | 1.18 | debian 11 | Arm64 | | Same | 1.01 | ubuntu 18.04 | Arm64 | | Slower | 0.47 | Windows 10 | Arm64 | | Slower | 0.42 | Windows 11 | Arm64 | | Slower | 0.43 | Windows 10 | X86 | | Slower | 0.48 | Windows 11 | X86 | | Slower | 0.49 | Windows 11 | X86 | | Slower | 0.34 | Windows 7 SP1 | X86 | | Same | 1.07 | ubuntu 18.04 | Arm | | Slower | 0.44 | Windows 10 | Arm | | Same | 1.02 | macOS Monterey 12.2.1 | X64 | | Same | 1.05 | macOS Monterey 12.3.1 | X64 | </details>
process
system diagnostics perf process getprocessbyid has regressed on windows system diagnostics perf process getprocessbyid has regressed on windows looking at the it s most likely caused by cc epeshk repro cmd git clone py performance scripts benchmarks monthly py filter system diagnostics perf process getprocessbyid system diagnostics perf process getprocessbyid result ratio operating system bit slower windows slower windows slower windows slower windows same alpine same centos same debian same pop same ubuntu faster ubuntu same alpine faster debian same ubuntu slower windows slower windows slower windows slower windows slower windows slower windows same ubuntu arm slower windows arm same macos monterey same macos monterey
1
10,483
13,252,914,973
IssuesEvent
2020-08-20 06:33:25
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
TableScanExecutor decoding PK slower than decoding row
component/performance difficulty/easy sig/coprocessor status/help-wanted
## Bug Report ``` table_scan_primary_key time: [1.8638 us 1.9087 us 1.9560 us] table_scan_datum_front time: [1.5506 us 1.5698 us 1.5898 us] ``` See https://github.com/tikv/tikv/pull/3776
1.0
TableScanExecutor decoding PK slower than decoding row - ## Bug Report ``` table_scan_primary_key time: [1.8638 us 1.9087 us 1.9560 us] table_scan_datum_front time: [1.5506 us 1.5698 us 1.5898 us] ``` See https://github.com/tikv/tikv/pull/3776
process
tablescanexecutor decoding pk slower than decoding row bug report table scan primary key time table scan datum front time see
1
16,949
22,303,136,269
IssuesEvent
2022-06-13 10:32:50
arcus-azure/arcus.messaging
https://api.github.com/repos/arcus-azure/arcus.messaging
opened
Remove deprecated `ValueMissingException`
good first issue area:message-processing breaking-change
**Is your feature request related to a problem? Please describe.** Previously, we used reflection to retrieve open interfaces from the dependency injection system as this was not built-in, but with some rework the reflection code was not needed anymore. **Describe the solution you'd like** Remove the deprecated `ValueMissingException` which was only used in the reflection code and now not needed anymore.
1.0
Remove deprecated `ValueMissingException` - **Is your feature request related to a problem? Please describe.** Previously, we used reflection to retrieve open interfaces from the dependency injection system as this was not built-in, but with some rework the reflection code was not needed anymore. **Describe the solution you'd like** Remove the deprecated `ValueMissingException` which was only used in the reflection code and now not needed anymore.
process
remove deprecated valuemissingexception is your feature request related to a problem please describe previously we used reflection to retrieve open interfaces from the dependency injection system as this was not built in but with some rework the reflection code was not needed anymore describe the solution you d like remove the deprecated valuemissingexception which was only used in the reflection code and now not needed anymore
1
48,881
3,000,815,826
IssuesEvent
2015-07-24 06:22:59
jayway/powermock
https://api.github.com/repos/jayway/powermock
closed
PowerMockito.whenNew() failing trying to call org.mockito.internal.IMockHandler()
bug imported Priority-Medium wontfix
_From [ank...@gmail.com](https://code.google.com/u/101453641862635350005/) on March 17, 2010 00:22:40_ I am using PowerMockito(1.3.6) with Mockito(1.8.3) and Junit(4.7) for testing. I am quite stuck on an error: [junit] Testcase: testNextPNRFromQueueHappyPath(com.orbitz.galileo.host.robot.GalpoQueueConnectionTest): Caused an ERROR [junit] org/mockito/internal/IMockHandler [junit] java.lang.NoClassDefFoundError: org/mockito/internal/IMockHandler [junit] at org.powermock.api.mockito.internal.expectation.DefaultConstructorExpectationSetup.createNewSubsituteMock(DefaultConstructorExpectationSetup.java:80) [junit] at org.powermock.api.mockito.internal.expectation.DefaultConstructorExpectationSetup.withArguments(DefaultConstructorExpectationSetup.java:48) [junit] at com.orbitz.galileo.host.robot.GalpoQueueConnectionTest.testNextPNRFromQueueHappyPath(GalpoQueueConnectionTest.java:62) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) My class under test : GalpoQueueConnection is basically creating a new object of another class : QueueReadBuilder. I am trying to do something like: @RunWith(PowerMockRunner.class) @PrepareForTest({GalpoQueueConnection.class, ApolloProcUtils.class}) public class GalpoQueueConnectionTest extends TestCase{ @Test public void testNextPNRFromQueueHappyPath() throws Exception { QueueReadBuilder mockBuilder = mock(QueueReadBuilder.class); PowerMockito.whenNew(QueueReadBuilder.class).withArguments(anyString(), anyString(), anyString()).thenReturn(mockBuilder); } } It seems like somehow powermockito is trying to call IMockHandler, and I looked at Mockito 1.8.3 api, and there isn’t any. Any suggestions/ clues would be welcome. _Original issue: http://code.google.com/p/powermock/issues/detail?id=240_
1.0
PowerMockito.whenNew() failing trying to call org.mockito.internal.IMockHandler() - _From [ank...@gmail.com](https://code.google.com/u/101453641862635350005/) on March 17, 2010 00:22:40_ I am using PowerMockito(1.3.6) with Mockito(1.8.3) and Junit(4.7) for testing. I am quite stuck on an error: [junit] Testcase: testNextPNRFromQueueHappyPath(com.orbitz.galileo.host.robot.GalpoQueueConnectionTest): Caused an ERROR [junit] org/mockito/internal/IMockHandler [junit] java.lang.NoClassDefFoundError: org/mockito/internal/IMockHandler [junit] at org.powermock.api.mockito.internal.expectation.DefaultConstructorExpectationSetup.createNewSubsituteMock(DefaultConstructorExpectationSetup.java:80) [junit] at org.powermock.api.mockito.internal.expectation.DefaultConstructorExpectationSetup.withArguments(DefaultConstructorExpectationSetup.java:48) [junit] at com.orbitz.galileo.host.robot.GalpoQueueConnectionTest.testNextPNRFromQueueHappyPath(GalpoQueueConnectionTest.java:62) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) My class under test : GalpoQueueConnection is basically creating a new object of another class : QueueReadBuilder. I am trying to do something like: @RunWith(PowerMockRunner.class) @PrepareForTest({GalpoQueueConnection.class, ApolloProcUtils.class}) public class GalpoQueueConnectionTest extends TestCase{ @Test public void testNextPNRFromQueueHappyPath() throws Exception { QueueReadBuilder mockBuilder = mock(QueueReadBuilder.class); PowerMockito.whenNew(QueueReadBuilder.class).withArguments(anyString(), anyString(), anyString()).thenReturn(mockBuilder); } } It seems like somehow powermockito is trying to call IMockHandler, and I looked at Mockito 1.8.3 api, and there isn’t any. Any suggestions/ clues would be welcome. _Original issue: http://code.google.com/p/powermock/issues/detail?id=240_
non_process
powermockito whennew failing trying to call org mockito internal imockhandler from on march i am using powermockito with mockito and junit for testing i am quite stuck on an error testcase testnextpnrfromqueuehappypath com orbitz galileo host robot galpoqueueconnectiontest caused an error org mockito internal imockhandler java lang noclassdeffounderror org mockito internal imockhandler at org powermock api mockito internal expectation defaultconstructorexpectationsetup createnewsubsitutemock defaultconstructorexpectationsetup java at org powermock api mockito internal expectation defaultconstructorexpectationsetup witharguments defaultconstructorexpectationsetup java at com orbitz galileo host robot galpoqueueconnectiontest testnextpnrfromqueuehappypath galpoqueueconnectiontest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java my class under test galpoqueueconnection is basically creating a new object of another class queuereadbuilder i am trying to do something like runwith powermockrunner class preparefortest galpoqueueconnection class apolloprocutils class public class galpoqueueconnectiontest extends testcase test public void testnextpnrfromqueuehappypath throws exception queuereadbuilder mockbuilder mock queuereadbuilder class powermockito whennew queuereadbuilder class witharguments anystring anystring anystring thenreturn mockbuilder it seems like somehow powermockito is trying to call imockhandler and i looked at mockito api and there isn’t any any suggestions clues would be welcome original issue
0