Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
29,266 | 11,738,861,640 | IssuesEvent | 2020-03-11 16:44:30 | timberio/vector | https://api.github.com/repos/timberio/vector | closed | Vector sink does not support TLS | domain: security domain: sinks sink: vector type: enhancement | In #1892 TLS support was added to the Vector source! This is awesome, but the Vector sink does not yet support TLS.
Fixing this issue means adding a TLS option to the Vector sink and making sure it works.
The options should align with https://vector.dev/docs/reference/sources/vector/ where possible. | True | Vector sink does not support TLS - In #1892 TLS support was added to the Vector source! This is awesome, but the Vector sink does not yet support TLS.
Fixing this issue means adding a TLS option to the Vector sink and making sure it works.
The options should align with https://vector.dev/docs/reference/sources/vector/ where possible. | non_process | vector sink does not support tls in tls support was added to the vector source this is awesome but the vector sink does not yet support tls fixing this issue means adding a tls option to the vector sink and making sure it works the options should match alongside where possible | 0 |
130,627 | 10,618,370,020 | IssuesEvent | 2019-10-13 03:48:55 | carbon-design-system/ibm-dotcom-library | https://api.github.com/repos/carbon-design-system/ibm-dotcom-library | closed | Support the adopter ibm.com Library v1.0.0 QA testing | Sprint Must Have adopter support adopter testing dev dotcom migrate | _hahnrob created the following on Sep 10:_
<!-- Avoid any type of solutions in this user story -->
<!-- replace _{{...}}_ with your own words or remove -->
#### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
DDS developer
> I need to:
resolve defects logged by the adopter QA testers
> so that I can:
be sure that the code released in ibm.com Library v1.0.0 is of high quality
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- [Engage some adopters to QA the ibm.com Library v1.0.0 release https://github.com/carbon-design-system/ibm-dotcom-library/issues/426](https://zenhub.ibm.com/app/workspaces/digital-design-system-5b3a815a8bf6a932efa6fd03/issues/webstandards/digital-design/1669)
#### Acceptance criteria
- [ ] Sev 1 defects are fixed and all other defects are prioritized to be included in a future release
<!-- Consider the following when writing Acceptance criteria for this story. -->
<!-- *** Each product backlog item or user story should have at least one Acceptance criteria. -->
<!-- *** Acceptance criteria defines a deliverable that can be completed in a single sprint -->
<!-- *** Each Acceptance criterion is independently testable. -->
<!-- *** Include functional as well as non-functional criteria – when relevant. -->
<!-- *** Team members write Acceptance criteria and the Product Owner verifies it. -->
_Original issue: https://github.ibm.com/webstandards/digital-design/issues/1671_ | 1.0 | Support the adopter ibm.com Library v1.0.0 QA testing - _hahnrob created the following on Sep 10:_
<!-- Avoid any type of solutions in this user story -->
<!-- replace _{{...}}_ with your own words or remove -->
#### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
DDS developer
> I need to:
resolve defects logged by the adopter QA testers
> so that I can:
be sure that the code released in ibm.com Library v1.0.0 is of high quality
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- [Engage some adopters to QA the ibm.com Library v1.0.0 release https://github.com/carbon-design-system/ibm-dotcom-library/issues/426](https://zenhub.ibm.com/app/workspaces/digital-design-system-5b3a815a8bf6a932efa6fd03/issues/webstandards/digital-design/1669)
#### Acceptance criteria
- [ ] Sev 1 defects are fixed and all other defects are prioritized to be included in a future release
<!-- Consider the following when writing Acceptance criteria for this story. -->
<!-- *** Each product backlog item or user story should have at least one Acceptance criteria. -->
<!-- *** Acceptance criteria defines a deliverable that can be completed in a single sprint -->
<!-- *** Each Acceptance criterion is independently testable. -->
<!-- *** Include functional as well as non-functional criteria – when relevant. -->
<!-- *** Team members write Acceptance criteria and the Product Owner verifies it. -->
_Original issue: https://github.ibm.com/webstandards/digital-design/issues/1671_ | non_process | support the adopter ibm com library qa testing hahnrob created the following on sep user story as a dds developer i need to resolve defects logged but the adopter qa testers so that i can be sure that the code released in ibm com library is of high quality additional information acceptance criteria sev defects are fixed and all other defects are prioritized to be included in a future release original issue | 0 |
3,138 | 6,190,348,359 | IssuesEvent | 2017-07-04 15:09:27 | PHPOffice/PHPWord | https://api.github.com/repos/PHPOffice/PHPWord | closed | TemplateProcessor insert PageBreak possible? | Responded Template Processor | Hi
is it possible to insert a pagebreak inside a processed document?
thanks!
Alex
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/46485607-templateprocessor-insert-pagebreak-possible?utm_campaign=plugin&utm_content=tracker%2F323108&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F323108&utm_medium=issues&utm_source=github).
</bountysource-plugin> | 1.0 | TemplateProcessor insert PageBreak possible? - Hi
is it possible to insert a pagebreak inside a processed document?
thanks!
Alex
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/46485607-templateprocessor-insert-pagebreak-possible?utm_campaign=plugin&utm_content=tracker%2F323108&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F323108&utm_medium=issues&utm_source=github).
</bountysource-plugin> | process | templateprocessor insert pagebreak possible hi ist it possible to insert a pagebreak inside a processed document thanks alex want to back this issue we accept bounties via | 1 |
52,852 | 10,948,166,939 | IssuesEvent | 2019-11-26 08:16:39 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | Compiler generate the wrong code when enabled "optimize code" for Unsafe.As | area-CodeGen | [origin issue](https://github.com/Tornhoof/SpanJson/issues/139)
It looks like the compiler is generating the wrong code when "optimize code" is enabled for `Unsafe.As`.
I use the SpanJson library for JSON serialization, and there is this code:
var buffer = JsonSerializer.Generic.Utf8.SerializeToArrayPool(response);
which serializes directly to the ArrayPool.
Internally it uses:
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static ArraySegment<byte> InnerSerializeToByteArrayPool(T input)
{
var jsonWriter = new JsonWriter<TSymbol>(_lastSerializationSizeEstimate);
try
{
Formatter.Serialize(ref jsonWriter, input);
_lastSerializationSizeEstimate = jsonWriter.Data.Length;
var temp = jsonWriter.Data;
var data = Unsafe.As<TSymbol[], byte[]>(ref temp);
return new ArraySegment<byte>(data, 0, jsonWriter.Position);
}
catch
{
jsonWriter.Dispose();
throw;
}
}
which sometimes, under certain unknown conditions, causes the error
An unhandled exception of type 'System.ExecutionEngineException' occurred in Unknown Module.
This can be fixed by `[MethodImpl(MethodImplOptions.NoOptimization)]`.
It can also be fixed by rewriting the code so the optimizer cannot fold `var data = Unsafe.As<TSymbol[], byte[]>(ref temp);` into `return new ArraySegment(buff, 0, jsonWriter.Position);`:
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static ArraySegment<byte> InnerSerializeToByteArrayPool(T input)
{
byte[] buff = new byte[0]; // !!!
var jsonWriter = new JsonWriter<TSymbol>(_lastSerializationSizeEstimate);
try
{
Formatter.Serialize(ref jsonWriter, input);
_lastSerializationSizeEstimate = jsonWriter.Data.Length;
var temp = jsonWriter.Data;
var data = Unsafe.As<TSymbol[], byte[]>(ref temp);
buff = data; // !!!
return new ArraySegment<byte>(buff, 0, jsonWriter.Position);
}
catch
{
jsonWriter.Dispose();
throw new ApplicationException(buff.ToString()); // !!!!
}
}
The crash dump gives the reason:
The thread tried to read from or write to a virtual address for which it does not have the appropriate access
FAILURE_BUCKET_ID: NULL_POINTER_READ_c0000005_coreclr.dll!TypeHandle::HasTypeEquivalence
reproduced on 3.0, 3.1, 5.0 Alpha (@Tornhoof)
[https://github.com/vitidev/CoreClrBug2](https://github.com/vitidev/CoreClrBug2)
| 1.0 | Compiler generate the wrong code when enabled "optimize code" for Unsafe.As - [origin issue](https://github.com/Tornhoof/SpanJson/issues/139)
It looks like the compiler is generating the wrong code when "optimize code" is enabled for `Unsafe.As`.
I use the SpanJson library for JSON serialization, and there is this code:
var buffer = JsonSerializer.Generic.Utf8.SerializeToArrayPool(response);
which serializes directly to the ArrayPool.
Internally it uses:
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static ArraySegment<byte> InnerSerializeToByteArrayPool(T input)
{
var jsonWriter = new JsonWriter<TSymbol>(_lastSerializationSizeEstimate);
try
{
Formatter.Serialize(ref jsonWriter, input);
_lastSerializationSizeEstimate = jsonWriter.Data.Length;
var temp = jsonWriter.Data;
var data = Unsafe.As<TSymbol[], byte[]>(ref temp);
return new ArraySegment<byte>(data, 0, jsonWriter.Position);
}
catch
{
jsonWriter.Dispose();
throw;
}
}
which sometimes, under certain unknown conditions, causes the error
An unhandled exception of type 'System.ExecutionEngineException' occurred in Unknown Module.
This can be fixed by `[MethodImpl(MethodImplOptions.NoOptimization)]`.
It can also be fixed by rewriting the code so the optimizer cannot fold `var data = Unsafe.As<TSymbol[], byte[]>(ref temp);` into `return new ArraySegment(buff, 0, jsonWriter.Position);`:
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static ArraySegment<byte> InnerSerializeToByteArrayPool(T input)
{
byte[] buff = new byte[0]; // !!!
var jsonWriter = new JsonWriter<TSymbol>(_lastSerializationSizeEstimate);
try
{
Formatter.Serialize(ref jsonWriter, input);
_lastSerializationSizeEstimate = jsonWriter.Data.Length;
var temp = jsonWriter.Data;
var data = Unsafe.As<TSymbol[], byte[]>(ref temp);
buff = data; // !!!
return new ArraySegment<byte>(buff, 0, jsonWriter.Position);
}
catch
{
jsonWriter.Dispose();
throw new ApplicationException(buff.ToString()); // !!!!
}
}
The crash dump gives the reason:
The thread tried to read from or write to a virtual address for which it does not have the appropriate access
FAILURE_BUCKET_ID: NULL_POINTER_READ_c0000005_coreclr.dll!TypeHandle::HasTypeEquivalence
reproduced on 3.0, 3.1, 5.0 Alpha (@Tornhoof)
[https://github.com/vitidev/CoreClrBug2](https://github.com/vitidev/CoreClrBug2)
| non_process | compiler generate the wrong code when enabled optimize code for unsafe as it looks like the compiler is generating the wrong code when enabled optimize code for unsafe as i use spanjson library for json serialization and there are code var buffer jsonserializer generic serializetoarraypool response which serialize directly to arraypool internally it use public static arraysegment innerserializetobytearraypool t input var jsonwriter new jsonwriter lastserializationsizeestimate try formatter serialize ref jsonwriter input lastserializationsizeestimate jsonwriter data length var temp jsonwriter data var data unsafe as ref temp return new arraysegment data jsonwriter position catch jsonwriter dispose throw who sometimes under certain and unknown conditions cause error an unhandled exception of type system executionengineexception occurred in unknown module that can be fixed by but also can be fixed if rewrite and prevent optimization inlining var data unsafe as ref temp to return new arraysegment buff jsonwriter position public static arraysegment innerserializetobytearraypool t input byte buff new byte var jsonwriter new jsonwriter lastserializationsizeestimate try formatter serialize ref jsonwriter input lastserializationsizeestimate jsonwriter data length var temp jsonwriter data var data unsafe as ref temp buff data return new arraysegment buff jsonwriter position catch jsonwriter dispose throw new applicationexception buff tostring dump say reasons the thread tried to read from or write to a virtual address for which it does not have the appropriate access failure bucket id null pointer read coreclr dll typehandle hastypeequivalence reproduced on on alpha tornhoof | 0 |
18,522 | 24,551,878,629 | IssuesEvent | 2022-10-12 13:12:56 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] Review consent should not get displayed for the enrolled users in the following scenario | Bug P0 iOS Process: Fixed Process: Tested QA Process: Tested dev | Steps:
1. Create a study in SB and launch the study
2. Click on edit study
3. Update the consent and enforce the consent for the enrolled participant
4. Publish the update
5. Launch the mobile app and sign up or sign in to the mobile app
6. Click on the particular study
7. Enroll to the study
8. Go back to the study list screen
9. Again click on the enrolled study
10. Observe
AR: The review consent popup is getting displayed to the newly enrolled participants in the above scenario
ER: The review consent popup should not be displayed to the newly enrolled participants in the above scenario

| 3.0 | [iOS] Review consent should not get displayed for the enrolled users in the following scenario - Steps:
1. Create a study in SB and launch the study
2. Click on edit study
3. Update the consent and enforce the consent for the enrolled participant
4. Publish the update
5. Launch the mobile app and sign up or sign in to the mobile app
6. Click on the particular study
7. Enroll to the study
8. Go back to the study list screen
9. Again click on the enrolled study
10. Observe
AR: The review consent popup is getting displayed to the newly enrolled participants in the above scenario
ER: The review consent popup should not be displayed to the newly enrolled participants in the above scenario

| process | review consent should not get displayed for the enrolled users in the following scenario steps create a study in sb and launch the study click on edit study update the consent and enforce the consent for the enrolled participant publish the update launch the mobile app and sign up or sign in to the mobile app click on the particular study enroll to the study go back to the study list screen again click on the enrolled study observe ar the review consent popup is getting displayed to the newly enrolled participants in the above scenario er the review consent popup should not be displayed to the newly enrolled participants in the above scenario | 1 |
213,882 | 24,022,483,762 | IssuesEvent | 2022-09-15 08:48:42 | nidhi7598/external_tcpdump_AOSP_10_r33_CVE-2018-14463 | https://api.github.com/repos/nidhi7598/external_tcpdump_AOSP_10_r33_CVE-2018-14463 | opened | CVE-2018-14881 (High) detected in platform_external_tcpdumpandroid-mainline-12.0.0_r17 | security vulnerability | ## CVE-2018-14881 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>platform_external_tcpdumpandroid-mainline-12.0.0_r17</b></p></summary>
<p>
<p>Library home page: <a href=https://github.com/aosp-mirror/platform_external_tcpdump.git>https://github.com/aosp-mirror/platform_external_tcpdump.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/external_tcpdump_AOSP_10_r33_CVE-2018-14463/commit/e43391921cf1e42060ac09a2872af046e8f71692">e43391921cf1e42060ac09a2872af046e8f71692</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/print-bgp.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The BGP parser in tcpdump before 4.9.3 has a buffer over-read in print-bgp.c:bgp_capabilities_print() (BGP_CAPCODE_RESTART).
<p>Publish Date: 2019-10-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14881>CVE-2018-14881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14881">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14881</a></p>
<p>Release Date: 2019-10-03</p>
<p>Fix Resolution: 4.9.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-14881 (High) detected in platform_external_tcpdumpandroid-mainline-12.0.0_r17 - ## CVE-2018-14881 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>platform_external_tcpdumpandroid-mainline-12.0.0_r17</b></p></summary>
<p>
<p>Library home page: <a href=https://github.com/aosp-mirror/platform_external_tcpdump.git>https://github.com/aosp-mirror/platform_external_tcpdump.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/external_tcpdump_AOSP_10_r33_CVE-2018-14463/commit/e43391921cf1e42060ac09a2872af046e8f71692">e43391921cf1e42060ac09a2872af046e8f71692</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/print-bgp.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The BGP parser in tcpdump before 4.9.3 has a buffer over-read in print-bgp.c:bgp_capabilities_print() (BGP_CAPCODE_RESTART).
<p>Publish Date: 2019-10-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14881>CVE-2018-14881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14881">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14881</a></p>
<p>Release Date: 2019-10-03</p>
<p>Fix Resolution: 4.9.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in platform external tcpdumpandroid mainline cve high severity vulnerability vulnerable library platform external tcpdumpandroid mainline library home page a href found in head commit a href found in base branch master vulnerable source files print bgp c vulnerability details the bgp parser in tcpdump before has a buffer over read in print bgp c bgp capabilities print bgp capcode restart publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
364,385 | 10,763,567,499 | IssuesEvent | 2019-11-01 04:38:36 | AY1920S1-CS2103T-F12-4/main | https://api.github.com/repos/AY1920S1-CS2103T-F12-4/main | closed | Update Expense class with new requirements | priority.High type.Enhancement | * [x] Add `LocalDateTime` field
* [x] Change `Amount` class
* [x] Update Add and Edit commands to include above changes
| 1.0 | Update Expense class with new requirements - * [x] Add `LocalDateTime` field
* [x] Change `Amount` class
* [x] Update Add and Edit commands to include above changes
| non_process | update expense class with new requirements add localdatetime field change amount class update add and edit commands to include above changes | 0 |
21,341 | 29,087,041,430 | IssuesEvent | 2023-05-16 01:25:32 | model-checking/kani-vscode-extension | https://api.github.com/repos/model-checking/kani-vscode-extension | closed | Option to view full output | enhancement must-have Kani Process Execution | <!--
If this is a security issue, please report it following the
[security reporting procedure](https://github.com/model-checking/kani/security/policy).
-->
Requested feature: An option to view the full output
Use case: Users may want to check the full output to inspect the statuses of cover statements or other checks. This isn't possible to do nowadays in the extension, despite it keeping a copy of this output. | 1.0 | Option to view full output - <!--
If this is a security issue, please report it following the
[security reporting procedure](https://github.com/model-checking/kani/security/policy).
-->
Requested feature: An option to view the full output
Use case: Users may want to check the full output to inspect the statuses of cover statements or other checks. This isn't possible to do nowadays in the extension, despite it keeping a copy of this output. | process | option to view full output if this is a security issue please report it following the requested feature an option to view the full output use case users may want to check the full output to inspect the statuses of cover statements or other checks this isn t possible to do nowadays in the extension despite it keeping a copy of this output | 1 |
6,095 | 8,953,126,703 | IssuesEvent | 2019-01-25 18:31:10 | cloudfoundry-community/stackdriver-tools | https://api.github.com/repos/cloudfoundry-community/stackdriver-tools | closed | release-v2.1.0 | release-process | # Release Process
## Before continuing
- [x] `make lint build test bosh-release tile` runs successfully
- [x] The system should build and all tests pass with the Concourse pipeline.
- [x] Open a ticket with the name `release-vX.Y.Z` and copy the contents of this file into its description.
- [x] Create a new pre-release branch in the GitHub repository labeled `release-vX.Y.Z`.
## Update the project in the branch
- [x] Update the `CHANGELOG.md` to match the new version.
- [x] Ensure `tile.yml.erb` is using the correct PCF version dependencies.
- [x] Commit the changes on the new branch.
## Our repository
- [x] Draft a new release in GitHub with the tag `vX.Y.Z-rc`
- [x] Include the version's changelog entry in the description.
- [x] Upload the built tile and BOSH release to the pre-release.
- [x] Check the box labeled **This is a pre-release**.
- [x] Publish the pre-release.
## Release on PivNet
- [x] Validate that the name in the generated tile's `metadata.yml` matches the slug on PivNet.
- [x] Ensure the release version is consistent on the tile and documentation.
- [x] Create a [new release on PivNet](network.pivotal.io) as an Admin Only release.
- [x] Upload the tile and BOSH release that were staged to GitHub.
- [x] Check that the tile passes the tests in the [build dashboard](https://tile-dashboard.cfapps.io/tiles/gcp-service-broker).
## Upgrade the documentation
- [x] Submit a pull request to the [documentation repository](https://github.com/pivotal-cf/docs-google/tree/master/docs-content).
- [x] Include the new release notes, changes, and the ERT/PAS and Ops Managers versions, as well as your Product Version and Release Date in the Product Snapshot on PivNet.
## File for release
- [x] Fill out the [release form](https://docs.google.com/forms/d/e/1FAIpQLSctLGMU8iOuwq6NqDYI65aMhJ7widDQGo9SawDG0b8TFfq7Ag/viewform).
- [x] An ISV Program Manager will make the release available to "All Users" after review. Partner Admins can make the release available to "Admin Users".
- [x] Merge the release branch once done.
- [ ] Submit an issue to https://github.com/cf-platform-eng/gcp-pcf-quickstart to update the GCP PCF quickstart.
| 1.0 | release-v2.1.0 - # Release Process
## Before continuing
- [x] `make lint build test bosh-release tile` runs successfully
- [x] The system should build and all tests pass with the Concourse pipeline.
- [x] Open a ticket with the name `release-vX.Y.Z` and copy the contents of this file into its description.
- [x] Create a new pre-release branch in the GitHub repository labeled `release-vX.Y.Z`.
## Update the project in the branch
- [x] Update the `CHANGELOG.md` to match the new version.
- [x] Ensure `tile.yml.erb` is using the correct PCF version dependencies.
- [x] Commit the changes on the new branch.
## Our repository
- [x] Draft a new release in GitHub with the tag `vX.Y.Z-rc`
- [x] Include the version's changelog entry in the description.
- [x] Upload the built tile and BOSH release to the pre-release.
- [x] Check the box labeled **This is a pre-release**.
- [x] Publish the pre-release.
## Release on PivNet
- [x] Validate that the name in the generated tile's `metadata.yml` matches the slug on PivNet.
- [x] Ensure the release version is consistent on the tile and documentation.
- [x] Create a [new release on PivNet](network.pivotal.io) as an Admin Only release.
- [x] Upload the tile and BOSH release that were staged to GitHub.
- [x] Check that the tile passes the tests in the [build dashboard](https://tile-dashboard.cfapps.io/tiles/gcp-service-broker).
## Upgrade the documentation
- [x] Submit a pull request to the [documentation repository](https://github.com/pivotal-cf/docs-google/tree/master/docs-content).
- [x] Include the new release notes, changes, and the ERT/PAS and Ops Managers versions, as well as your Product Version and Release Date in the Product Snapshot on PivNet.
## File for release
- [x] Fill out the [release form](https://docs.google.com/forms/d/e/1FAIpQLSctLGMU8iOuwq6NqDYI65aMhJ7widDQGo9SawDG0b8TFfq7Ag/viewform).
- [x] An ISV Program Manager will make the release available to "All Users" after review. Partner Admins can make the release available to "Admin Users".
- [x] Merge the release branch once done.
- [ ] Submit an issue to https://github.com/cf-platform-eng/gcp-pcf-quickstart to update the GCP PCF quickstart.
| process | release release process before continuing make lint build test bosh release tile runs successfully the system should build and all tests pass with the concourse pipeline open a ticket with the name release vx y z and copy the contents of this file into its description create a new pre release branch in the github repository labeled release vx y z update the project in the branch update the changelog md to match the new version ensure tile yml erb is using the correct pcf version dependencies commit the changes on the new branch our repository draft a new release in github with the tag vx y z rc include the version s changelog entry in the description upload the built tile and bosh release to the pre release check the box labeled this is a pre release publish the pre release release on pivnet validate that the name in the generated tile s metadata yml matches the slug on pivnet ensure the release version is consistent on the tile and documentation create a network pivotal io as an admin only release upload the tile and bosh release that were staged to github check that the tile passes the tests in the upgrade the documentation submit a pull request to the include the new release notes changes and the ert pas and ops managers versions as well as your product version and release date in the product snapshot on pivnet file for release fill out the an isv program manager will make the release available to all users after review partner admins can make the release available to admin users merge the release branch once done submit an issue to to update the gcp pcf quickstart | 1 |
12,427 | 14,927,938,972 | IssuesEvent | 2021-01-24 17:18:55 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] Resources > Contents added in Rich text editor are not reflected in mobile | Bug P1 Process: Fixed Process: Tested dev iOS | Steps:
1. Add some content in a resource
2. Publish the updates
3. Navigate to resources in mobile
4. Click on the added resource
5. Observe
Actual: Contents added in Rich text editor are not reflected in mobile
iOS:

SB:

| 2.0 | [iOS] Resources > Contents added in Rich text editor are not reflected in mobile - Steps:
1. Add some content in a resource
2. Publish the updates
3. Navigate to resources in mobile
4. Click on the added resource
5. Observe
Actual: Contents added in Rich text editor are not reflected in mobile
iOS:

SB:

| process | resources contents added in rich text editor are not reflected in mobile steps add some content in a resource publish the updates navigate to resources in mobile click on the added resource observe actual contents added in rich text editor are not reflected in mobile ios sb | 1 |
13,383 | 15,862,481,472 | IssuesEvent | 2021-04-08 11:39:42 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [PM] UI Issues in password strength meter | Bug P1 Participant manager Process: Fixed Process: Tested QA Process: Tested dev UI | UI Issues in password strength meter:
1. Strength meter should be as per UI design i.e length of meter should display below New password field completely
2. Width of meter should be increased i.e as per design
3. Text 'Weak'/'Fair'/'Goog'/'Strong' font size should be increased
4. Colour of top portion of meter is not filled completely
5. Font colour for Green and Red is different compared to design

| 3.0 | [PM] UI Issues in password strength meter - UI Issues in password strength meter:
1. Strength meter should be as per UI design i.e length of meter should display below New password field completely
2. Width of meter should be increased i.e as per design
3. Text 'Weak'/'Fair'/'Goog'/'Strong' font size should be increased
4. Colour of top portion of meter is not filled completely
5. Font colour for Green and Red is different compared to design

| process | ui issues in password strength meter ui issues in password strength meter strength meter should be as per ui design i e length of meter should display below new password field completely width of meter should be increased i e as per design text weak fair goog strong font size should be increased colour of top portion of meter is not filled completely font colour for green and red is different compared to design | 1 |
319,724 | 9,753,773,292 | IssuesEvent | 2019-06-04 09:54:25 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Send authorization error to callback uri and not to oauth2_error.do page | Affected/5.3.0 Priority/High Severity/Major WUM | For some error reasons such as OAuthProblemException occurs, Identity Server redirect the error messages to the authenticationendpoint/oauth2_error.do URL.
According to the OIDC specification [1], unless the redirection URI is invalid, the response should send the redirect URL.
`Unless the Redirection URI is invalid, the Authorization Server returns the Client to the Redirection URI specified in the Authorization Request with the appropriate error and state parameters. Other parameters SHOULD NOT be returned.`
An example scenario is providing an invalid response_type in an authorize request.
| 1.0 | Send authorization error to callback uri and not to oauth2_error.do page - For some error reasons such as OAuthProblemException occurs, Identity Server redirect the error messages to the authenticationendpoint/oauth2_error.do URL.
According to the OIDC specification [1], unless the redirection URI is invalid, the response should send the redirect URL.
`Unless the Redirection URI is invalid, the Authorization Server returns the Client to the Redirection URI specified in the Authorization Request with the appropriate error and state parameters. Other parameters SHOULD NOT be returned.`
An example scenario is providing an invalid response_type in an authorize request.
| non_process | send authorization error to callback uri and not to error do page for some error reasons such as oauthproblemexception occurs identity server redirect the error messages to the authenticationendpoint error do url according to the oidc specification unless the redirection uri is invalid the response should send the redirect url unless the redirection uri is invalid the authorization server returns the client to the redirection uri specified in the authorization request with the appropriate error and state parameters other parameters should not be returned an example scenario is provie a inalid response type in authorize request | 0 |
85,165 | 3,687,473,979 | IssuesEvent | 2016-02-25 08:33:50 | CloudOpting/cloudopting-manager | https://api.github.com/repos/CloudOpting/cloudopting-manager | closed | create a page for trying to merge toscaIDE in the orchestrator | enhancement high priority | Let us try to see if we manage to have a page visible only to service provider to try to place the toscaIDE inside the orchestrator.
I need a page much like the "button" page at the moment so I can try to take it and port it there.
Could you also remind me where I have to place javascript that has to be applied just to that page.
Thanks
| 1.0 | create a page for trying to merge toscaIDE in the orchestrator - Let us try to see if we manage to have a page visible only to service provider to try to place the toscaIDE inside the orchestrator.
I need a page much like the "button" page at the moment so I can try to take it and port it there.
Could you also remind me where I have to place javascript that has to be applied just to that page.
Thanks
| non_process | create a page for trying to merge toscaide in the orchestrator let us try to see if we manage to have a page visible only to service provider to try to place the toscaide inside the orchestrator i need a page much like the button page at the moment so i can try to take it and port it there if you can also remember me where i have to place javascript that has to be applyed just to that page thanks | 0 |
634 | 3,092,123,380 | IssuesEvent | 2015-08-26 16:15:27 | e-government-ua/iBP | https://api.github.com/repos/e-government-ua/iBP | opened | Надвірнянська РДА - Issuing a certificate of receipt (non-receipt) of assistance | in process of creating |
existing process - https://drive.google.com/a/privatbank.ua/file/d/0B4vk1jpTDb_5Q001LUMtZ1VWZkU/view
proposed process - https://drive.google.com/a/privatbank.ua/file/d/0B4vk1jpTDb_5a2FJOVlNeGpmSFk/view
| 1.0 | Надвірнянська РДА - Issuing a certificate of receipt (non-receipt) of assistance -
existing process - https://drive.google.com/a/privatbank.ua/file/d/0B4vk1jpTDb_5Q001LUMtZ1VWZkU/view
proposed process - https://drive.google.com/a/privatbank.ua/file/d/0B4vk1jpTDb_5a2FJOVlNeGpmSFk/view
| process | надвірнянська рда issuing a certificate of receipt non receipt of assistance existing process proposed process view |
20,095 | 26,628,275,011 | IssuesEvent | 2023-01-24 15:59:19 | gobuffalo/pop | https://api.github.com/repos/gobuffalo/pop | opened | process: update supported database versions | process | Currently, Pop supports the following versions as the minimum supported version.
* postgres:10
* cockroachdb/cockroach:latest-v21.1
* mysql:5.7
Time to update supported versions postgres, cockroach, and hopefully mysql too :-)
| Database | Latest | Minimum | Ubuntu 20.04 / 22.04 | Debian 10 / 11 | RHEL 8 / 9 |
|------------------|-----------|-------------|------------------------------|---------------------|---------------------|
| SQLite3 | 3.40.0 | - | 3.31.1 / 3.37.2 | 3.27.2 / 3.34.1 | 3.26.0 / 3.34.1 |
| PostgreSQL | 15 | **11** | 12 / 14 | **11** / 13 | 13.7 / 13.7 |
| MySQL | 8.0 | **5.7** | 8.0 / 8.0 | **_5.7_** / 8.0 | 8.0 / 8.0 |
| MariaDB | 10.6 | **10.3** | 10.3.37 / 10.6.11 | 10.3.36 / 10.5.18 | 10.5.16 / 10.5.16 |
| Cockroach | v22.2 | _v21.2_ | | | |
Considerations:
* EOS date for Cockroach v21.2 is 2023-05-16
* MySQL 5.7 is only supported on Debian 10 (EOS for Debian 10 is 2024-06-30 but...) | 1.0 | process: update supported database versions - Currently, Pop supports the following versions as the minimum supported version.
* postgres:10
* cockroachdb/cockroach:latest-v21.1
* mysql:5.7
Time to update supported versions postgres, cockroach, and hopefully mysql too :-)
| Database | Latest | Minimum | Ubuntu 20.04 / 22.04 | Debian 10 / 11 | RHEL 8 / 9 |
|------------------|-----------|-------------|------------------------------|---------------------|---------------------|
| SQLite3 | 3.40.0 | - | 3.31.1 / 3.37.2 | 3.27.2 / 3.34.1 | 3.26.0 / 3.34.1 |
| PostgreSQL | 15 | **11** | 12 / 14 | **11** / 13 | 13.7 / 13.7 |
| MySQL | 8.0 | **5.7** | 8.0 / 8.0 | **_5.7_** / 8.0 | 8.0 / 8.0 |
| MariaDB | 10.6 | **10.3** | 10.3.37 / 10.6.11 | 10.3.36 / 10.5.18 | 10.5.16 / 10.5.16 |
| Cockroach | v22.2 | _v21.2_ | | | |
Considerations:
* EOS date for Cockroach v21.2 is 2023-05-16
* MySQL 5.7 is only supported on Debian 10 (EOS for Debian 10 is 2024-06-30 but...) | process | process update supported database versions currently pop supports the following versions as the minimum supported version postgres cockroachdb cockroach latest mysql time to update supported versions postgres cockroach and hopefully mysql too database latest minimum ubuntu debian rhel postgresql mysql mariadb cockroach considerations eos date for cockroach is mysql is only supported on debian eos for debian is but | 1 |
824,361 | 31,151,951,775 | IssuesEvent | 2023-08-16 10:33:06 | grpc/grpc | https://api.github.com/repos/grpc/grpc | opened | `StatusCode.UNKNOWN Stream removed` when it should be unavailable | kind/bug lang/Python priority/P2 | <!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
1.56.2 and 1.57.0
### What operating system (Linux, Windows,...) and version?
Linux GCP
### What runtime / compiler are you using (e.g. python version or version of gcc)
3.8.13
### What did you do?
Please provide either 1) A unit test for reproducing the bug or 2) Specific steps for us to follow to reproduce the bug. If there’s not enough information to debug the problem, gRPC team may close the issue at their discretion. You’re welcome to re-open the issue once you have a reproduction.
We have a high availability Kubernetes cluster using Envoy as gRPC proxy since we use SRV records on our DNS. gRPC has some issues working with SRV records using the Python client.
We recently updated from `1.48.1` to both `1.57.0` and `1.56.2` and with both versions we have seen a new behaviour which is causing some issues.
When there is high load and one of our upstream services does a Kubernetes deployment, while new pods are spinning up and old pods are dying, Envoy starts to respond `unavailable`. With the version `1.48.1` these unavailable calls would tell the gRPC that the client was no longer available and it would open a new connection, but with the new versions gRPC client doesn't process the `unavailable` response and instead throws an unrecoverable error.
From here the only thing we can do is restart Envoy to mitigate the issue
>
### What did you expect to see?
We would expect to see an unavailable error in gRPC and create a new connection from Envoy
### What did you see instead?
```
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 277, in __call__
response, ignored_call = self._with_call(
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 332, in _with_call
return call.result(), call
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_channel.py", line 438, in result
raise self
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 315, in continuation
response, call = self._thunk(new_method).with_call(
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 343, in with_call
return self._with_call(
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 332, in _with_call
return call.result(), call
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_channel.py", line 438, in result
raise self
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 315, in continuation
response, call = self._thunk(new_method).with_call(
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_channel.py", line 1178, in with_call
return _end_unary_response_blocking(state, call, True, None)
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_channel.py", line 1004, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "Stream removed"
debug_error_string = "UNKNOWN:Error received from peer {grpc_message:"Stream removed", grpc_status:2, created_time:"2023-08-16T10:21:12.913725397+00:00"}"
```
Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).
See [TROUBLESHOOTING.md](https://github.com/grpc/grpc/blob/master/TROUBLESHOOTING.md) for how to diagnose problems better.
### Anything else we should know about your project / environment?
Its happening with high load around 50 RPS | 1.0 | `StatusCode.UNKNOWN Stream removed` when it should be unavailable - <!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
1.56.2 and 1.57.0
### What operating system (Linux, Windows,...) and version?
Linux GCP
### What runtime / compiler are you using (e.g. python version or version of gcc)
3.8.13
### What did you do?
Please provide either 1) A unit test for reproducing the bug or 2) Specific steps for us to follow to reproduce the bug. If there’s not enough information to debug the problem, gRPC team may close the issue at their discretion. You’re welcome to re-open the issue once you have a reproduction.
We have a high availability Kubernetes cluster using Envoy as gRPC proxy since we use SRV records on our DNS. gRPC has some issues working with SRV records using the Python client.
We recently updated from `1.48.1` to both `1.57.0` and `1.56.2` and with both versions we have seen a new behaviour which is causing some issues.
When there is high load and one of our upstream services does a Kubernetes deployment, while new pods are spinning up and old pods are dying, Envoy starts to respond `unavailable`. With the version `1.48.1` these unavailable calls would tell the gRPC that the client was no longer available and it would open a new connection, but with the new versions gRPC client doesn't process the `unavailable` response and instead throws an unrecoverable error.
From here the only thing we can do is restart Envoy to mitigate the issue
>
### What did you expect to see?
We would expect to see an unavailable error in gRPC and create a new connection from Envoy
### What did you see instead?
```
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 277, in __call__
response, ignored_call = self._with_call(
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 332, in _with_call
return call.result(), call
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_channel.py", line 438, in result
raise self
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 315, in continuation
response, call = self._thunk(new_method).with_call(
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 343, in with_call
return self._with_call(
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 332, in _with_call
return call.result(), call
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_channel.py", line 438, in result
raise self
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_interceptor.py", line 315, in continuation
response, call = self._thunk(new_method).with_call(
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_channel.py", line 1178, in with_call
return _end_unary_response_blocking(state, call, True, None)
File "/usr/src/app/.venv/lib/python3.8/site-packages/grpc/_channel.py", line 1004, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "Stream removed"
debug_error_string = "UNKNOWN:Error received from peer {grpc_message:"Stream removed", grpc_status:2, created_time:"2023-08-16T10:21:12.913725397+00:00"}"
```
Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).
See [TROUBLESHOOTING.md](https://github.com/grpc/grpc/blob/master/TROUBLESHOOTING.md) for how to diagnose problems better.
### Anything else we should know about your project / environment?
Its happening with high load around 50 RPS | non_process | statuscode unknown stream removed when it should be unavailable please do not post a question here this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers at stackoverflow with grpc tag for questions that specifically need to be answered by grpc team members please ask look for answers at grpc io mailing list issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g what version of grpc and what language are you using and what operating system linux windows and version linux gcp what runtime compiler are you using e g python version or version of gcc what did you do please provide either a unit test for reproducing the bug or specific steps for us to follow to reproduce the bug if there’s not enough information to debug the problem grpc team may close the issue at their discretion you’re welcome to re open the issue once you have a reproduction we have a high availability kubernetes cluster using envoy as grpc proxy since we use srv records on our dns grpc has some issues working with srv records using the python client we recently updated from to both and and in both libraries we are seen a new behaviour which is causing some issues when there is high load and one of our upstream services does a kubernetes deployment while new pods are spinning up and old pods are dying envoy starts to respond unavailable with the version these unavailable calls would tell the grpc that the client was no longer available and it would open a new connection but with the new versions grpc client doesn t process the unavailable response and instead throws an unrecoverable error from here the only thing we can do is restart envoy to mitigate the issue what did you expect to see we would expect to see an unavailable error in grpc and create a new connection from envoy what did you see instead file 
usr src app venv lib site packages grpc interceptor py line in call response ignored call self with call file usr src app venv lib site packages grpc interceptor py line in with call return call result call file usr src app venv lib site packages grpc channel py line in result raise self file usr src app venv lib site packages grpc interceptor py line in continuation response call self thunk new method with call file usr src app venv lib site packages grpc interceptor py line in with call return self with call file usr src app venv lib site packages grpc interceptor py line in with call return call result call file usr src app venv lib site packages grpc channel py line in result raise self file usr src app venv lib site packages grpc interceptor py line in continuation response call self thunk new method with call file usr src app venv lib site packages grpc channel py line in with call return end unary response blocking state call true none file usr src app venv lib site packages grpc channel py line in end unary response blocking raise inactiverpcerror state pytype disable not instantiable grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with status statuscode unknown details stream removed debug error string unknown error received from peer grpc message stream removed grpc status created time make sure you include information that can help us debug full error message exception listing stack trace logs see for how to diagnose problems better anything else we should know about your project environment its happening with high load around rps | 0 |
17,512 | 23,325,676,804 | IssuesEvent | 2022-08-08 20:55:55 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | The terminal process failed to launch when I reopen VSCode after using Command+Q to quit VSCode | bug terminal-process | 1. create a directory at path: /Users/ma/本地/demo_project.(contains Chinese component)
2. open the directory from VSCode top menu: File - Open...
3. open a new terminal from top menu: Terminal - New Terminal. and terminal window automatically goes to path (demo_project).
4. use Command + Q shortcut to quit VSCode completely.
5. reopen VSCode from mac dock icon. and VSCode shows error: The terminal process failed to launch: Starting directory (cwd) "/Users/ma/\xe6\x9c\xac\xe5\x9c\xb0/demo_project" does not exist.
6. when the path in step1 does not contain Chinese(like /Users/ma/local/demo_project.), no error shows. the error only shows when the path contains Chinese.
Extra VSCode information:
Version: 1.65.2 (Universal)
Commit: c722ca6c7eed3d7987c0d5c3df5c45f6b15e77d1
Date: 2022-03-10T14:33:49.188Z (1 wk ago)
Electron: 13.5.2
Chromium: 91.0.4472.164
Node.js: 14.16.0
V8: 9.1.269.39-electron.0
OS: Darwin x64 21.4.0
| 1.0 | The terminal process failed to launch when I reopen VSCode after using Command+Q to quit VSCode - 1. create a directory at path: /Users/ma/本地/demo_project.(contains Chinese component)
2. open the directory from VSCode top menu: File - Open...
3. open a new terminal from top menu: Terminal - New Terminal. and terminal window automatically goes to path (demo_project).
4. use Command + Q shortcut to quit VSCode completely.
5. reopen VSCode from mac dock icon. and VSCode shows error: The terminal process failed to launch: Starting directory (cwd) "/Users/ma/\xe6\x9c\xac\xe5\x9c\xb0/demo_project" does not exist.
6. when the path in step1 does not contain Chinese(like /Users/ma/local/demo_project.), no error shows. the error only shows when the path contains Chinese.
Extra VSCode information:
Version: 1.65.2 (Universal)
Commit: c722ca6c7eed3d7987c0d5c3df5c45f6b15e77d1
Date: 2022-03-10T14:33:49.188Z (1 wk ago)
Electron: 13.5.2
Chromium: 91.0.4472.164
Node.js: 14.16.0
V8: 9.1.269.39-electron.0
OS: Darwin x64 21.4.0
| process | the terminal process failed to launch when i reopen vscode after using command q to quit vscode create a directory at path users ma 本地 demo project contains chinese component open the directory from vscode top menu file open open a new terminal from top menu terminal new terminal and terminal window automatically goes to path demo project use command q shortcut to quit vscode completely reopen vscode from mac dock icon and vscode shows error the terminal process failed to launch starting directory cwd users ma xac demo project does not exist when the path in does not contain chinese like users ma local demo project no error shows the error only shows when the path contains chinese extra vscode information: version universal commit date wk ago electron chromium node js electron os darwin | 1 |
38,520 | 2,848,287,365 | IssuesEvent | 2015-05-29 21:56:17 | mbainrot/home_automation | https://api.github.com/repos/mbainrot/home_automation | opened | Document and support the use of a custom DHCP code | priority:low status:not started type:new feature | isc-dhcp-server dhcpd.conf example
```
option homeautomation-server code 200 = text;
option homeautomation-server "your ip address";
```
This would tell clients where your automation server is, reducing the amount of configuration required /provided/ node-mcu's firmware has a way of getting the custom dhcp options and that we can find a code that won't break anything | 1.0 | Document and support the use of a custom DHCP code - isc-dhcp-server dhcpd.conf example
```
option homeautomation-server code 200 = text;
option homeautomation-server "your ip address";
```
This would tell clients where your automation server is, reducing the amount of configuration required /provided/ node-mcu's firmware has a way of getting the custom dhcp options and that we can find a code that won't break anything | non_process | document and support the use of a custom dhcp code isc dhcp server dhcpd conf example option homeautomation server code text option homeautomation server your ip address this would tell clients where your automation server is reducing the amount of configuration required provided node mcu s firmware has a way of getting the custom dhcp options and that we can find a code that won t break anything | 0 |
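On the client side, the firmware would have to pull option 200 back out of the DHCP reply. A minimal sketch of that lookup, assuming the options area is available as raw bytes (the helper name is illustrative, and whether node-mcu firmware exposes received options at all is exactly the open question in the issue):

```python
from typing import Optional

def find_dhcp_option(options: bytes, wanted_code: int) -> Optional[bytes]:
    """Walk the DHCP options area (code, length, value triplets) and
    return the raw value bytes of wanted_code, or None if absent."""
    i = 0
    while i < len(options):
        code = options[i]
        if code == 0:        # pad option: a single byte, no length field
            i += 1
            continue
        if code == 255:      # end option: stop scanning
            return None
        length = options[i + 1]
        if code == wanted_code:
            return options[i + 2:i + 2 + length]
        i += 2 + length
    return None

# Options area: message-type (53), our option 200, then end (255)
opts = bytes([53, 1, 5, 200, 12]) + b"192.168.1.10" + bytes([255])
print(find_dhcp_option(opts, 200))  # → b'192.168.1.10'
```

Note that 200 is simply the code suggested in the dhcpd.conf example above; picking a code that no other software on the network interprets is the "won't break anything" part.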
112,452 | 4,532,883,636 | IssuesEvent | 2016-09-08 09:36:05 | nbnuk/als-issues | https://api.github.com/repos/nbnuk/als-issues | closed | Remove option for CC-BY-SA license from Submit a Sighting. | bug low-priority | The options should be OGL, CC0, CC-BY, CC-BY-NC | 1.0 | Remove option for CC-BY-SA license from Submit a Sighting. - The options should be OGL, CC0, CC-BY, CC-BY-NC | non_process | remove option for cc by sa license from submit a sighting the options should be ogl cc by cc by nc | 0 |
204,080 | 15,398,716,697 | IssuesEvent | 2021-03-04 00:37:14 | nucleus-security/Test-repo | https://api.github.com/repos/nucleus-security/Test-repo | opened | Nucleus - Project: Ticketing Rules now apply to all vulnerabilities - [Medium] - CentOS Security Update for kernel (CESA-2017:0817) | Testing | Source: QUALYS
Finding Description: CentOS has released a security update for kernel to fix the vulnerabilities.
Affected Products: centos 6
Impact: This vulnerability could be exploited to gain complete access to sensitive information. Malicious users could also use this vulnerability to change all the contents or configuration on the system. Additionally this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable.
Target(s): Asset name: 45.55.254.143
IP: 45.55.254.143
Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-cr-announce/2017-march/003811.html) for updates and patch information.
Patch: Following are links for downloading patches to fix the vulnerabilities:
CESA-2017:0817: centos 6 (https://lists.centos.org/pipermail/centos-cr-announce/2017-march/003811.html)
References:
ID:256192
CVE:CVE-2016-10088,CVE-2016-10142,CVE-2016-2069,CVE-2016-2384,CVE-2016-6480,CVE-2016-7042,CVE-2016-7097,CVE-2016-8399,CVE-2016-9576
Category:CentOS
PCI Flagged:1
Vendor References:CESA-2017:0817 centos 6
Bugtraq IDs:92659,81809,94708,94821,83256,84216,95169,95797,92214,93544
Severity: Medium
Date Discovered: 2020-01-07 14:35:48
Nucleus Notification Rules Triggered: GitHub Rule
Project Name: Ticketing Rules now apply to all vulnerabilities
| 1.0 | Nucleus - Project: Ticketing Rules now apply to all vulnerabilities - [Medium] - CentOS Security Update for kernel (CESA-2017:0817) - Source: QUALYS
Finding Description: CentOS has released a security update for kernel to fix the vulnerabilities.
Affected Products: centos 6
Impact: This vulnerability could be exploited to gain complete access to sensitive information. Malicious users could also use this vulnerability to change all the contents or configuration on the system. Additionally this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable.
Target(s): Asset name: 45.55.254.143
IP: 45.55.254.143
Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-cr-announce/2017-march/003811.html) for updates and patch information.
Patch: Following are links for downloading patches to fix the vulnerabilities:
CESA-2017:0817: centos 6 (https://lists.centos.org/pipermail/centos-cr-announce/2017-march/003811.html)
References:
ID:256192
CVE:CVE-2016-10088,CVE-2016-10142,CVE-2016-2069,CVE-2016-2384,CVE-2016-6480,CVE-2016-7042,CVE-2016-7097,CVE-2016-8399,CVE-2016-9576
Category:CentOS
PCI Flagged:1
Vendor References:CESA-2017:0817 centos 6
Bugtraq IDs:92659,81809,94708,94821,83256,84216,95169,95797,92214,93544
Severity: Medium
Date Discovered: 2020-01-07 14:35:48
Nucleus Notification Rules Triggered: GitHub Rule
Project Name: Ticketing Rules now apply to all vulnerabilities
| non_process | nucleus project ticketing rules now apply to all vulnerabilities centos security update for kernel cesa source qualys finding description centos has released security update for kernel to fix the vulnerabilities affected products centos impact this vulnerability could be exploited to gain complete access to sensitive information malicious users could also use this vulnerability to change all the contents or configuration on the system additionally this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable target s asset name ip solution to resolve this issue upgrade to the latest packages which contain a patch refer to centos advisory for updates and patch information patch following are links for downloading patches to fix the vulnerabilities references id cve cve cve cve cve cve cve cve cve cve category centos pci flagged vendor references cesa centos bugtraq ids severity medium date discovered nucleus notification rules triggered github rule project name ticketing rules now apply to all vulnerabilities | 0 |
777,761 | 27,293,234,093 | IssuesEvent | 2023-02-23 18:08:08 | infor-design/enterprise-ng | https://api.github.com/repos/infor-design/enterprise-ng | closed | Monthview's parameter 'Active Date' not working | type: bug :bug: [3] priority: minor | **Describe the bug**
Active Date parameter on Monthview component is not working correctly. After using a number the component will disappear and a message 's.activeDate.getDate is not a function' will be shown on the console.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://stackblitz.com/edit/monthview-component-activate-date-not-working-re3fx8?file=src%2Fapp%2Fform%2Fform.component.html
2. Add parameter [activeDate]="2"
3. See how component disappears.
4. Check console
**Expected behavior**
The selected date on the component should be changed and the component should remain visible.
**Version**
- ids-enterprise-ng: 13.6.2
**Screenshots**
<img width="619" alt="image" src="https://user-images.githubusercontent.com/9669865/220741558-b939dea3-3fc8-4adb-9b39-42ed1d2b9234.png">
**Additional context**
I've verified in the 'monthview-api-func-test.js' file that the activeDate parameter was set to "new Date(2022, 6, 10)", so maybe the input type is wrong in the final version of the component.
| 1.0 | Monthview's parameter 'Active Date' not working - **Describe the bug**
Active Date parameter on Monthview component is not working correctly. After using a number the component will disappear and a message 's.activeDate.getDate is not a function' will be shown on the console.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://stackblitz.com/edit/monthview-component-activate-date-not-working-re3fx8?file=src%2Fapp%2Fform%2Fform.component.html
2. Add parameter [activeDate]="2"
3. See how component disappears.
4. Check console
**Expected behavior**
The selected date on the component should be changed and the component should remain visible.
**Version**
- ids-enterprise-ng: 13.6.2
**Screenshots**
<img width="619" alt="image" src="https://user-images.githubusercontent.com/9669865/220741558-b939dea3-3fc8-4adb-9b39-42ed1d2b9234.png">
**Additional context**
I've verified on 'monthview-api-func-test.js' file that the activeDate parameter was set to "new Date(2022, 6, 10)", so maybe the input type is wrong on the final version of the component.
| non_process | monthview s parameter active date not working describe the bug active date parameter on monthview component is not working correctly after using a number the component will disappear and a message s activedate getdate is not a function will be shown on the console to reproduce steps to reproduce the behavior go to add parameter see how component disappears check console expected behavior the selected date on the component should be changed and the component should remain visible version ids enterprise ng screenshots img width alt image src additional context i ve verified on monthview api func test js file that the activedate parameter was set to new date so maybe the input type is wrong on the final version of the component | 0 |
11,879 | 14,677,288,051 | IssuesEvent | 2020-12-30 22:47:59 | esmero/strawberryfield | https://api.github.com/repos/esmero/strawberryfield | closed | Move from Identify to pdfinfo for PDFs | Digital Preservation Events and Subscriber JSON Postprocessors enhancement question | # why?
because Identify on PDFs is a bottleneck.
a PDF processed (all pages) to extract Page dimensions takes almost a minute (128 pages)
pdfinfo takes less than a second.
The difference is HUGE.
Also, we already ship with `pdfinfo`
What needs to be done?
We need to have the same output format.
## Identify
`identify -format 'format:%m|width:%w|height:%h|orientation:%[orientation]@' -quiet sbr_somehash.pdf`
Which gives us for each line this
`@format:PDF|width:628|height:790|orientation:Undefined` which we split and JSON-ify
## pdfinfo
We need first to get the number of pages (a few ms)
`pdfinfo sbr_somehash.pdf -quiet | grep "^Pages:"`
We get this
`Pages: 124`
Now with that number (needs processing we can call)
`pdfinfo sbr_somehash.pdf -f 1 -l 124 |grep 'Page'`
which will give us a large list of two lines per page
Page 124 size: 630 x 794 pts
Page 124 rot: 0
This is also in the `ms` range of processing time
## what is needed?
I have to change \Drupal\strawberryfield\StrawberryfieldFilePersisterService::getBaseFileMetadata
to exempt PDFs and Ghostscript files from run through identify and instead run PDF Info.
@giancarlobi I wonder whether, in that case, I should keep the same output format we use for identify?
Or would it be better to go for its own flv:pdfinfo?
| 1.0 | Move from Identify to pdfinfo for PDFs - # why?
because Identify on PDFs is a bottleneck.
a PDF processed (all pages) to extract Page dimensions takes almost a minute (128 pages)
pdfinfo takes less than a second.
The difference is HUGE.
Also, we already ship with `pdfinfo`
What needs to be done?
We need to have the same output format.
## Identify
`identify -format 'format:%m|width:%w|height:%h|orientation:%[orientation]@' -quiet sbr_somehash.pdf`
Which gives us for each line this
`@format:PDF|width:628|height:790|orientation:Undefined` which we split and JSON-ify
## pdfinfo
We need first to get the number of pages (a few ms)
`pdfinfo sbr_somehash.pdf -quiet | grep "^Pages:"`
We get this
`Pages: 124`
Now with that number (needs processing we can call)
`pdfinfo sbr_somehash.pdf -f 1 -l 124 |grep 'Page'`
which will give us a large list of two lines per page
Page 124 size: 630 x 794 pts
Page 124 rot: 0
This is also in the `ms` range of processing time
## what is needed?
I have to change \Drupal\strawberryfield\StrawberryfieldFilePersisterService::getBaseFileMetadata
to exempt PDFs and Ghostscript files from run through identify and instead run PDF Info.
@giancarlobi I wonder whether, in that case, I should keep the same output format we use for identify?
Or would it be better to go for its own flv:pdfinfo?
| process | move from identify to pdfinfo for pdfs why because identify on pdfs is a bottleneck a pdf processed all pages to extract page dimensions takes almost a minute pages pdfinfo takes less than a second the difference is huge also we already ship with pdfinfo what needs to be done we need to have the same output format identify identify format format m width w height h orientation quiet sbr somehash pdf which gives us for each line this format pdf width height orientation undefined which we split and json ify pdfinfo we need first to get the number of pages a few ms pdfinfo sbr somehash pdf quiet grep pages we get this pages now with that number needs processing we can call pdfinfo sbr somehash pdf f l grep page which will give us a large list of two lines per page page size x pts page rot this is also in the ms range of processing time what is needed i have to change drupal strawberryfield strawberryfieldfilepersisterservice getbasefilemetadata to exempt pdfs and ghostscript files from run through identify and instead run pdf info giancarlobi i wonder if that is the case i should keep the same output formate we use of identify or would it better to go for its own flv pdfinfo | 1 |
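The two-step pdfinfo flow described in the record above still needs parsing logic to match identify's per-page metadata. A minimal sketch (in Python, assuming pdfinfo's usual `Page N size: W x H pts` / `Page N rot: R` output; the function and field names are illustrative — the actual Archipelago service is PHP/Drupal):

```python
import re

# Per-page lines emitted by `pdfinfo -f 1 -l N`, e.g.
#   "Page  124 size: 630 x 794 pts"
#   "Page  124 rot:  0"
PAGE_SIZE_RE = re.compile(r"^Page\s+(\d+)\s+size:\s+([\d.]+)\s+x\s+([\d.]+)\s+pts")
PAGE_ROT_RE = re.compile(r"^Page\s+(\d+)\s+rot:\s+(\d+)")

def parse_pdfinfo_pages(output):
    """Turn pdfinfo's per-page output into a {page: metadata} dict
    shaped like the identify-based format (format/width/height/orientation)."""
    pages = {}
    for line in output.splitlines():
        line = line.strip()
        m = PAGE_SIZE_RE.match(line)
        if m:
            page = int(m.group(1))
            pages.setdefault(page, {"format": "PDF"})
            pages[page]["width"] = float(m.group(2))
            pages[page]["height"] = float(m.group(3))
            continue
        m = PAGE_ROT_RE.match(line)
        if m:
            page = int(m.group(1))
            pages.setdefault(page, {"format": "PDF"})
            pages[page]["orientation"] = int(m.group(2))
    return pages
```

In practice the `output` string would come from invoking `pdfinfo somefile.pdf -f 1 -l <pages>` via a subprocess, after first reading the page count from `pdfinfo somefile.pdf -quiet`.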
10,514 | 13,284,071,257 | IssuesEvent | 2020-08-24 05:19:33 | tikv/tikv | https://api.github.com/repos/tikv/tikv | opened | copr: use enum to represent logical_rows | sig/coprocessor type/enhancement | ## Feature Request
### Is your feature request related to a problem? Please describe:
<!-- A description of what the problem is. -->
### Describe the feature you'd like:
Currently, `logical_rows` is a `&[usize]` slice. If `logical_rows` is identical, it will point to `IDENTICAL_LOGICAL_ROWS`, where each element has the same value as index. By the way, `IDENTICAL_LOGICAL_ROWS` is limited to `BATCH_MAX_SIZE` (or 1024).
In fact, we could represent `logical_rows` as an enum. If it is identical, we'll only need to store row numbers. If not, we could store a slice inside. After that, we will never need to use `IDENTICAL_LOGICAL_ROWS`.
After this feature is implemented, https://github.com/tikv/tikv/issues/8488 could efficiently figure out if we could encode `VectorValue` using fast-path.
### Describe alternatives you've considered:
<!-- A description of any alternative solutions or features you've considered. -->
* remove `logical_rows` by making them part of `ChunkVec`.
### Teachability, Documentation, Adoption, Migration Strategy:
<!-- If you can, explain some scenarios how users might use this, or situations in which it would be helpful. Any API designs, mockups, or diagrams are also helpful. -->
`LogicalRows` enum has been already implemented. But we found some underlying issue in TiKV, that is to say, if a `logical_rows` is identical, the number of elements it contain may be larger than `BATCH_MAX_SIZE`. This may cause TiKV to crash. Currently, `LogicalRows::Identical` is disabled all over TiKV.
Related PRs:
* https://github.com/tikv/tikv/pull/8345/ introduces `LogicalRows`
* https://github.com/tikv/tikv/pull/8397 introduces some fast-path
* https://github.com/tikv/tikv/issues/8481 observes issue that elements could exceed `BATCH_MAX_SIZE`
* https://github.com/tikv/tikv/pull/8482 disabled `LogicalRows::Identical` | 1.0 | copr: use enum to represent logical_rows - ## Feature Request
### Is your feature request related to a problem? Please describe:
<!-- A description of what the problem is. -->
### Describe the feature you'd like:
Currently, `logical_rows` is a `&[usize]` slice. If `logical_rows` is identical, it will point to `IDENTICAL_LOGICAL_ROWS`, where each element has the same value as index. By the way, `IDENTICAL_LOGICAL_ROWS` is limited to `BATCH_MAX_SIZE` (or 1024).
In fact, we could represent `logical_rows` as an enum. If it is identical, we'll only need to store row numbers. If not, we could store a slice inside. After that, we will never need to use `IDENTICAL_LOGICAL_ROWS`.
After this feature is implemented, https://github.com/tikv/tikv/issues/8488 could efficiently figure out if we could encode `VectorValue` using fast-path.
### Describe alternatives you've considered:
<!-- A description of any alternative solutions or features you've considered. -->
* remove `logical_rows` by making them part of `ChunkVec`.
### Teachability, Documentation, Adoption, Migration Strategy:
<!-- If you can, explain some scenarios how users might use this, or situations in which it would be helpful. Any API designs, mockups, or diagrams are also helpful. -->
`LogicalRows` enum has been already implemented. But we found some underlying issue in TiKV, that is to say, if a `logical_rows` is identical, the number of elements it contain may be larger than `BATCH_MAX_SIZE`. This may cause TiKV to crash. Currently, `LogicalRows::Identical` is disabled all over TiKV.
Related PRs:
* https://github.com/tikv/tikv/pull/8345/ introduces `LogicalRows`
* https://github.com/tikv/tikv/pull/8397 introduces some fast-path
* https://github.com/tikv/tikv/issues/8481 observes issue that elements could exceed `BATCH_MAX_SIZE`
* https://github.com/tikv/tikv/pull/8482 disabled `LogicalRows::Identical` | process | copr use enum to represent logical rows feature request is your feature request related to a problem please describe describe the feature you d like currently logical rows is a slice if logical rows is identical it will point to identical logical rows where each element has the same value as index by the way identical logical rows is limited to batch max size or in fact we could represent logical rows as an enum if it is identical we ll only need to store row numbers if not we could store a slice inside after that we will never need to use identical logical rows after this feature is implemented could efficiently figure out if we could encode vectorvalue using fast path describe alternatives you ve considered remove logical rows by making them part of chunkvec teachability documentation adoption migration strategy logicalrows enum has been already implemented but we found some underlying issue in tikv that is to say if a logical rows is identical the number of elements it contain may be larger than batch max size this may cause tikv to crash currently logicalrows identical is disabled all over tikv related prs introduces logicalrows introduces some fast path observes issue that elements could exceed batch max size disabled logicalrows identical | 1 |
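The enum proposed in the record above can be sketched as a tagged union that stores either just a row count (the identical case) or an explicit index list. This is an illustration only — TiKV is Rust, and the type/method names here are hypothetical — but it shows the shape:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Identical:
    """Rows 0..length-1 in order; only the count is stored,
    so no IDENTICAL_LOGICAL_ROWS lookup table is needed."""
    length: int

    def as_indices(self) -> List[int]:
        return list(range(self.length))

    def is_identical(self) -> bool:
        return True

@dataclass
class Explicit:
    """An arbitrary selection/reordering of rows."""
    indices: List[int]

    def as_indices(self) -> List[int]:
        return self.indices

    def is_identical(self) -> bool:
        return self.indices == list(range(len(self.indices)))
```

A consumer could then branch on the variant to take a fast path (e.g. encode a `VectorValue` without indirection) when the rows are identical.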
80,680 | 30,479,278,749 | IssuesEvent | 2023-07-17 18:59:00 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | reopened | DataTable: rowExpansion+rowEditor creates duplicate ids | :lady_beetle: defect | **Describe the defect**
Using dataTable with rowExpansion and rowEditor creates duplicate ids.
**Reproducer**
[primefaces-datatable-rowexpansion-roweditor-duplicate-ids.zip](https://github.com/primefaces/primefaces/files/7068337/primefaces-datatable-rowexpansion-roweditor-duplicate-ids.zip)
(Targets 10.0.0 instead of master-SNAPSHOT due to #7739.)
**Environment:**
- PF Version: _6.2.30_, _10.0.0_
- JSF + version: _ALL_
- Affected browsers: _ALL_
**To Reproduce**
1. Open the reproducer.
2. Click rowEditor to start editing a row.
3. Finish editing (apply or cancel - doesn't matter).
4. See the error message in the browser console (a script detects duplicate ids and logs them).

For some reason rowExpansion content is cloned on row edit finish.
**Expected behavior**
No duplicate ids.
**Example XHTML**
```html
<h:form id="form">
<p:dataTable id="datatable" value="#{[1,2]}" var="row" editable="true" expandedRow="true">
<p:ajax event="rowEdit" oncomplete="checkDuplicateIds()" />
<p:ajax event="rowEditCancel" oncomplete="checkDuplicateIds()" />
<p:column width="80">
<p:rowToggler />
</p:column>
<p:column width="80">
<p:rowEditor />
</p:column>
<p:column>
#{row} row
</p:column>
<p:rowExpansion>
<h:outputText id="text" value="#{row} details"/>
</p:rowExpansion>
</p:dataTable>
</h:form>
<script>
function checkDuplicateIds() {
setTimeout(function() {
var ids = {};
$('[id]').each(function() {
var id = this.id;
if (id && ids[id]) {
ids[id].push(this);
console.error('Duplicate id: ' + id + ' in elements', ids[id]);
} else {
ids[id] = [this];
}
});
}, 100);
}
checkDuplicateIds();
</script>
```
**Example Bean**
None. | 1.0 | DataTable: rowExpansion+rowEditor creates duplicate ids - **Describe the defect**
Using dataTable with rowExpansion and rowEditor creates duplicate ids.
**Reproducer**
[primefaces-datatable-rowexpansion-roweditor-duplicate-ids.zip](https://github.com/primefaces/primefaces/files/7068337/primefaces-datatable-rowexpansion-roweditor-duplicate-ids.zip)
(Targets 10.0.0 instead of master-SNAPSHOT due to #7739.)
**Environment:**
- PF Version: _6.2.30_, _10.0.0_
- JSF + version: _ALL_
- Affected browsers: _ALL_
**To Reproduce**
1. Open the reproducer.
2. Click rowEditor to start editing a row.
3. Finish editing (apply or cancel - doesn't matter).
4. See the error message in the browser console (a script detects duplicate ids and logs them).

For some reason rowExpansion content is cloned on row edit finish.
**Expected behavior**
No duplicate ids.
**Example XHTML**
```html
<h:form id="form">
<p:dataTable id="datatable" value="#{[1,2]}" var="row" editable="true" expandedRow="true">
<p:ajax event="rowEdit" oncomplete="checkDuplicateIds()" />
<p:ajax event="rowEditCancel" oncomplete="checkDuplicateIds()" />
<p:column width="80">
<p:rowToggler />
</p:column>
<p:column width="80">
<p:rowEditor />
</p:column>
<p:column>
#{row} row
</p:column>
<p:rowExpansion>
<h:outputText id="text" value="#{row} details"/>
</p:rowExpansion>
</p:dataTable>
</h:form>
<script>
function checkDuplicateIds() {
setTimeout(function() {
var ids = {};
$('[id]').each(function() {
var id = this.id;
if (id && ids[id]) {
ids[id].push(this);
console.error('Duplicate id: ' + id + ' in elements', ids[id]);
} else {
ids[id] = [this];
}
});
}, 100);
}
checkDuplicateIds();
</script>
```
**Example Bean**
None. | non_process | datatable rowexpansion roweditor creates duplicate ids describe the defect using datatable with rowexpansion and roweditor creates duplicate ids reproducer targets instead of master snapshot due to environment pf version jsf version all affected browsers all to reproduce open the reproducer click roweditor to start editing a row finish editing apply or cancel doesn t matter see the error message in the browser console a script detects duplicate ids and logs them for some reason rowexpansion content is cloned on row edit finish expected behavior no duplicate ids example xhtml html row row function checkduplicateids settimeout function var ids each function var id this id if id amp amp ids ids push this console error duplicate id id in elements ids else ids checkduplicateids example bean none | 0 |
18,291 | 24,396,446,400 | IssuesEvent | 2022-10-04 19:44:21 | Drexel-UHC/analytics-corner | https://api.github.com/repos/Drexel-UHC/analytics-corner | closed | [Request title]: UTF-8 Primer + how to fix invalid UTF-8 for web apps | text-processing | ### Name
Ran
### Job title
Engineer
### Department
Urban Health Collaborative
### Request type
technical question
### Request description
Our datastore for the SALURBAL web application has some invalid UTF-8 characters which are not rendered properly in production (see below)

Request is a post that gives a brief primer on text encoding (particularly UTF-8) and provides a solution for how to detect and fix invalid UTF-8 characters within a data store.
### Example
_No response_
### Data
_No response_
### Notes
_No response_ | 1.0 | [Request title]: UTF-8 Primer + how to fix invalid UTF-8 for web apps - ### Name
Ran
### Job title
Engineer
### Department
Urban Health Collaborative
### Request type
technical question
### Request description
Our datastore for the SALURBAL web application has some invalid UTF-8 characters which are not rendered properly in production (see below)

Request is a post that gives a brief primer on text encoding (particularly UTF-8) and provides a solution for how to detect and fix invalid UTF-8 characters within a data store.
### Example
_No response_
### Data
_No response_
### Notes
_No response_ | process | utf primer how to fix invalid utf for web apps name ran job title engineer department urban health collaborative request type technical question request description our datastore for the salurbal web applciation has some invalid utf characters which are not rendered properly in production see below request is a post that gives a brief primer on text encoding particualrly utf and provide a solution of how to detect and fix invalid utf characters within a data store example no response data no response notes no response | 1 |
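A hedged sketch of how the invalid UTF-8 problem described in the record above could be handled in Python (the replacement-character strategy shown is one common option, not necessarily what the SALURBAL app uses; function names are illustrative):

```python
def find_invalid_utf8(data: bytes):
    """Return byte offsets where strict UTF-8 decoding fails."""
    offsets = []
    pos = 0
    while pos < len(data):
        try:
            data[pos:].decode("utf-8")
            break  # the rest decodes cleanly
        except UnicodeDecodeError as e:
            # e.start is relative to the slice; record the absolute offset
            offsets.append(pos + e.start)
            pos += e.start + 1  # skip the bad byte and keep scanning
    return offsets

def repair_utf8(data: bytes) -> str:
    """Decode, replacing undecodable bytes with U+FFFD so the text
    renders instead of breaking downstream consumers."""
    return data.decode("utf-8", errors="replace")
```

Running `find_invalid_utf8` over stored records would locate the bad bytes for inspection; `repair_utf8` (or a smarter transcoding, if the original encoding is known) would clean them.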
93,051 | 10,764,465,539 | IssuesEvent | 2019-11-01 08:22:16 | Icesiolz/ped | https://api.github.com/repos/Icesiolz/ped | opened | Command Summary documentation inconsistency | severity.Low type.DocumentationBug | When i tried deleterep 1 it didnt work, (although earlier it is mentioned that it should be delrep)

| 1.0 | Command Summary documentation inconsistency - When i tried deleterep 1 it didnt work, (although earlier it is mentioned that it should be delrep)

| non_process | command summary documentation inconsistency when i tried deleterep it didnt work although earlier it is mentioned that it should be delrep | 0 |
812,163 | 30,320,013,588 | IssuesEvent | 2023-07-10 18:28:29 | apcountryman/picolibrary | https://api.github.com/repos/apcountryman/picolibrary | closed | Remove WIZnet W5500 TCP over IP client socket | priority-normal status-awaiting_review type-refactoring | Remove WIZnet W5500 TCP over IP client socket (`::picolibrary::WIZnet::W5500::IP::TCP::Client`) | 1.0 | Remove WIZnet W5500 TCP over IP client socket - Remove WIZnet W5500 TCP over IP client socket (`::picolibrary::WIZnet::W5500::IP::TCP::Client`) | non_process | remove wiznet tcp over ip client socket remove wiznet tcp over ip client socket picolibrary wiznet ip tcp client | 0 |
5,659 | 8,528,680,126 | IssuesEvent | 2018-11-03 02:09:23 | pelias/api | https://api.github.com/repos/pelias/api | closed | Error running Ciao tests with Node.js 6 | Q1-2017 processed | ```
julian@julian-mapzen ~/repos/pelias/api $ npm run ciao
> pelias-api@0.0.0-semantic-release ciao /home/julian/repos/pelias/api
> node node_modules/ciao/bin/ciao -c test/ciao.json test/ciao
/home/julian/repos/pelias/api/node_modules/ciao/bin/ciao:42
if( err ) throw new Error( err );
^
Error: Unexpected end of JSON input
at ciao (/home/julian/repos/pelias/api/node_modules/ciao/bin/ciao:42:19)
at RequestChain.done (/home/julian/repos/pelias/api/node_modules/ciao/lib/Script.coffee:14:12)
at /home/julian/repos/pelias/api/node_modules/ciao/lib/RequestChain.coffee:81:17
at /home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:726:13
at /home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:52:16
at done (/home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:241:17)
at /home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:44:16
at /home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:723:17
at /home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:167:37
at Process.<anonymous> (/home/julian/repos/pelias/api/node_modules/ciao/lib/RequestChain.coffee:72:18)
at emitMany (events.js:127:13)
at Process.emit (events.js:201:7)
at ChildProcess.<anonymous> (/home/julian/repos/pelias/api/node_modules/ciao/lib/Process.coffee:32:8)
at emitTwo (events.js:106:13)
at ChildProcess.emit (events.js:191:7)
at maybeClose (internal/child_process.js:852:16)
at Socket.<anonymous> (internal/child_process.js:323:11)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at Pipe._handle.close [as _onclose] (net.js:492:12)
npm ERR! Linux 4.2.0-38-generic
npm ERR! argv "/usr/bin/nodejs" "/home/julian/bin/npm" "run" "ciao"
npm ERR! node v6.3.1
npm ERR! npm v3.10.3
npm ERR! code ELIFECYCLE
npm ERR! pelias-api@0.0.0-semantic-release ciao: `node node_modules/ciao/bin/ciao -c test/ciao.json test/ciao`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the pelias-api@0.0.0-semantic-release ciao script 'node node_modules/ciao/bin/ciao -c test/ciao.json test/ciao'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the pelias-api package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node node_modules/ciao/bin/ciao -c test/ciao.json test/ciao
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs pelias-api
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls pelias-api
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /home/julian/repos/pelias/api/npm-debug.log
julian@julian-mapzen ~/repos/pelias/api $ node -v
v6.3.1
julian@julian-mapzen ~/repos/pelias/api $ npm -v
3.10.3
```
I tried digging into the code a bit, but couldn't figure out where the error was being generated. (My coffeescript is very rusty). This is on master with no changes and a freshly installed set of node_modules. @missinglink any idea? | 1.0 | Error running Ciao tests with Node.js 6 - ```
julian@julian-mapzen ~/repos/pelias/api $ npm run ciao
> pelias-api@0.0.0-semantic-release ciao /home/julian/repos/pelias/api
> node node_modules/ciao/bin/ciao -c test/ciao.json test/ciao
/home/julian/repos/pelias/api/node_modules/ciao/bin/ciao:42
if( err ) throw new Error( err );
^
Error: Unexpected end of JSON input
at ciao (/home/julian/repos/pelias/api/node_modules/ciao/bin/ciao:42:19)
at RequestChain.done (/home/julian/repos/pelias/api/node_modules/ciao/lib/Script.coffee:14:12)
at /home/julian/repos/pelias/api/node_modules/ciao/lib/RequestChain.coffee:81:17
at /home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:726:13
at /home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:52:16
at done (/home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:241:17)
at /home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:44:16
at /home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:723:17
at /home/julian/repos/pelias/api/node_modules/ciao/node_modules/async/lib/async.js:167:37
at Process.<anonymous> (/home/julian/repos/pelias/api/node_modules/ciao/lib/RequestChain.coffee:72:18)
at emitMany (events.js:127:13)
at Process.emit (events.js:201:7)
at ChildProcess.<anonymous> (/home/julian/repos/pelias/api/node_modules/ciao/lib/Process.coffee:32:8)
at emitTwo (events.js:106:13)
at ChildProcess.emit (events.js:191:7)
at maybeClose (internal/child_process.js:852:16)
at Socket.<anonymous> (internal/child_process.js:323:11)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at Pipe._handle.close [as _onclose] (net.js:492:12)
npm ERR! Linux 4.2.0-38-generic
npm ERR! argv "/usr/bin/nodejs" "/home/julian/bin/npm" "run" "ciao"
npm ERR! node v6.3.1
npm ERR! npm v3.10.3
npm ERR! code ELIFECYCLE
npm ERR! pelias-api@0.0.0-semantic-release ciao: `node node_modules/ciao/bin/ciao -c test/ciao.json test/ciao`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the pelias-api@0.0.0-semantic-release ciao script 'node node_modules/ciao/bin/ciao -c test/ciao.json test/ciao'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the pelias-api package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node node_modules/ciao/bin/ciao -c test/ciao.json test/ciao
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs pelias-api
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls pelias-api
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /home/julian/repos/pelias/api/npm-debug.log
julian@julian-mapzen ~/repos/pelias/api $ node -v
v6.3.1
julian@julian-mapzen ~/repos/pelias/api $ npm -v
3.10.3
```
I tried digging into the code a bit, but couldn't figure out where the error was being generated. (My coffeescript is very rusty). This is on master with no changes and a freshly installed set of node_modules. @missinglink any idea? | process | error running ciao tests with node js julian julian mapzen repos pelias api npm run ciao pelias api semantic release ciao home julian repos pelias api node node modules ciao bin ciao c test ciao json test ciao home julian repos pelias api node modules ciao bin ciao if err throw new error err error unexpected end of json input at ciao home julian repos pelias api node modules ciao bin ciao at requestchain done home julian repos pelias api node modules ciao lib script coffee at home julian repos pelias api node modules ciao lib requestchain coffee at home julian repos pelias api node modules ciao node modules async lib async js at home julian repos pelias api node modules ciao node modules async lib async js at done home julian repos pelias api node modules ciao node modules async lib async js at home julian repos pelias api node modules ciao node modules async lib async js at home julian repos pelias api node modules ciao node modules async lib async js at home julian repos pelias api node modules ciao node modules async lib async js at process home julian repos pelias api node modules ciao lib requestchain coffee at emitmany events js at process emit events js at childprocess home julian repos pelias api node modules ciao lib process coffee at emittwo events js at childprocess emit events js at maybeclose internal child process js at socket internal child process js at emitone events js at socket emit events js at pipe handle close net js npm err linux generic npm err argv usr bin nodejs home julian bin npm run ciao npm err node npm err npm npm err code elifecycle npm err pelias api semantic release ciao node node modules ciao bin ciao c test ciao json test ciao npm err exit status npm err npm err failed at the pelias api 
semantic release ciao script node node modules ciao bin ciao c test ciao json test ciao npm err make sure you have the latest version of node js and npm installed npm err if you do this is most likely a problem with the pelias api package npm err not with npm itself npm err tell the author that this fails on your system npm err node node modules ciao bin ciao c test ciao json test ciao npm err you can get information on how to open an issue for this project with npm err npm bugs pelias api npm err or if that isn t available you can get their info via npm err npm owner ls pelias api npm err there is likely additional logging output above npm err please include the following file with any support request npm err home julian repos pelias api npm debug log julian julian mapzen repos pelias api node v julian julian mapzen repos pelias api npm v i tried digging into the code a bit but couldn t figure out where the error was being generated my coffeescript is very rusty this is on master with no changes and a freshly installed set of node modules missinglink any idea | 1 |
15,274 | 19,256,090,819 | IssuesEvent | 2021-12-09 11:26:12 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | closed | Anomaly/Bias/Error preprocessor | enhancement preprocessor | What do you guys think about a preprocessor for anomaly calculation?
It seems a rather common task to calculate anomalies or errors against either
- reference dataset
- climatology over certain period
- a fixed point in time
What do you think? | 1.0 | Anomaly/Bias/Error preprocessor - What do you guys think about a preprocessor for anomaly calculation?
It seems a rather common task to calculate anomalies or errors against either
- reference dataset
- climatology over certain period
- a fixed point in time
What do you think? | process | anomaly bias error preprocessor what do you guys think about a preprocessor for anomaly calculation it seems a rather common task to calculate anomalies or errors against either reference dataset climatology over certain period a fixed point in time what do you think | 1 |
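As an illustration of the "climatology over a certain period" variant mentioned above (ESMValCore preprocessors actually operate on iris cubes; this plain-Python sketch only shows the arithmetic), anomalies could be computed by subtracting a periodic climatology:

```python
def climatology(values, period=12):
    """Mean over each position in a repeating period, e.g. a monthly
    climatology from a monthly time series."""
    clim = []
    for i in range(period):
        samples = values[i::period]
        clim.append(sum(samples) / len(samples))
    return clim

def anomalies(values, period=12):
    """Subtract the periodic climatology from each value."""
    clim = climatology(values, period)
    return [v - clim[i % period] for i, v in enumerate(values)]
```

The reference-dataset and fixed-point-in-time variants are the same subtraction with a different baseline (another dataset's values, or a single snapshot, respectively).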
625,128 | 19,719,345,488 | IssuesEvent | 2022-01-13 14:05:53 | ctm/mb2-doc | https://api.github.com/repos/ctm/mb2-doc | opened | Automate certificate expiration checking | chore high priority easy | Make it so a deploy at minimum checks for certificates that will expire within a month.
The certificate for `ws.devctm.com` expired without me noticing, so when I deployed this morning it appeared the deploy failed. The error message that mb2 currently brings up is absolutely unhelpful, because Rust doesn't look into the JsValue to find the string that explains what happens. I used to get email from LetsEncrypt as certificates were getting close to expiring, but apparently I no longer do (or perhaps they're being routed to my spam folder).
So, at minimum the deploy should detect when our certificates are close to expiring and fail with an error message, although really it shouldn't be too hard to automate the updating. | 1.0 | Automate certificate expiration checking - Make it so a deploy at minimum checks for certificates that will expire within a month.
The certificate for `ws.devctm.com` expired without me noticing, so when I deployed this morning it appeared the deploy failed. The error message that mb2 currently brings up is absolutely unhelpful, because Rust doesn't look into the JsValue to find the string that explains what happens. I used to get email from LetsEncrypt as certificates were getting close to expiring, but apparently I no longer do (or perhaps they're being routed to my spam folder).
So, at minimum the deploy should detect when our certificates are close to expiring and fail with an error message, although really it shouldn't be too hard to automate the updating. | non_process | automate certificate expiration checking make it so a deploy at minimum checks for certificates that will expire within a month the certificate for ws devctm com expired without me noticing so when i deployed this morning it appeared the deploy failed the error message that currently brings up is absolutely unhelpful because rust doesn t look into the jsvalue to find the string that explains what happens i used to get email from letsencrypt as certificates were getting close to expiring but apparently i no longer do or perhaps they re being routed to my spam folder so at minimum the deploy should detect when our certificates are close to expiring and fail with an error message although really it shouldn t be too hard to automate the updating | 0 |
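A minimal sketch of the proposed check (in Python, with illustrative names; mb2's actual deploy tooling may differ), given certificate `notAfter` strings in OpenSSL's usual format:

```python
from datetime import datetime, timezone

# Format used by e.g. ssl.getpeercert()["notAfter"]: "Jan 13 14:05:53 2023 GMT"
NOT_AFTER_FMT = "%b %d %H:%M:%S %Y %Z"

def days_until_expiry(not_after, now=None):
    """Days left before a certificate's notAfter timestamp."""
    expires = datetime.strptime(not_after, NOT_AFTER_FMT).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def check_certs(not_afters, min_days=30, now=None):
    """Return hostnames whose certificates expire within min_days,
    so a deploy can fail loudly instead of silently serving a dead cert."""
    return [host for host, na in not_afters.items()
            if days_until_expiry(na, now) < min_days]
```

In the deploy, the `not_afters` mapping would be gathered by opening a TLS connection to each hostname and reading the peer certificate; a non-empty result would abort with a clear message.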
746,826 | 26,047,871,630 | IssuesEvent | 2022-12-22 15:51:55 | wso2/api-manager | https://api.github.com/repos/wso2/api-manager | closed | Prepare for APIM 4.2.0-alpha | Type/Task Priority/Highest Component/APIM 4.2.0-alpha | ### Description
Prepare for APIM 4.2.0-alpha
- [x] Generate the License.txt
- [x] https://github.com/wso2/api-manager/issues/1027
- [x] https://github.com/wso2/api-manager/issues/1023
- [x] https://github.com/wso2/api-manager/issues/1026
- [x] Run the release build
- [x] Do the Github pre release
- [x] Send out the release mail
### Affected Component
APIM
### Version
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_ | 1.0 | Prepare for APIM 4.2.0-alpha - ### Description
Prepare for APIM 4.2.0-alpha
- [x] Generate the License.txt
- [x] https://github.com/wso2/api-manager/issues/1027
- [x] https://github.com/wso2/api-manager/issues/1023
- [x] https://github.com/wso2/api-manager/issues/1026
- [x] Run the release build
- [x] Do the Github pre release
- [x] Send out the release mail
### Affected Component
APIM
### Version
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_ | non_process | prepare for apim alpha description prepare for apim alpha generate the license txt run the release build do the github pre release send out the release mail affected component apim version no response related issues no response suggested labels no response | 0 |
7,560 | 10,680,012,948 | IssuesEvent | 2019-10-21 20:28:12 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | GO:0045087 innate immune response/ add synonym | multi-species process | GO:0045087 innate immune response
is defined
Innate immune responses are defense responses mediated by germline encoded components that directly recognize components of potential pathogens.
so in this case we need narrow synonyms for "pamp triggered immunity"
and "PTI" which is what the first response in plants is called.
[ NOTE: that I could use the descendant term
GO:0061760 antifungal innate immune response
but I am avoiding this because I think should remove the organism-specific terms and just describe the processes (these are probably largely the same and only differ by recognition molecules so this seems to be an unnecessary axis of classification).
However, if we keep these terms they should still be located by searching on
PTI and Pamp triggered immunity" so this would require the synonym "fungal PTI"...etc] | 1.0 | GO:0045087 innate immune response/ add synonym - GO:0045087 innate immune response
is defined
Innate immune responses are defense responses mediated by germline encoded components that directly recognize components of potential pathogens.
so in this case we need narrow synonyms for "pamp triggered immunity"
and "PTI" which is what the first response in plants is called.
[ NOTE: that I could use the descendant term
GO:0061760 antifungal innate immune response
but I am avoiding this because I think should remove the organism-specific terms and just describe the processes (these are probably largely the same and only differ by recognition molecules so this seems to be an unnecessary axis of classification).
However, if we keep these terms they should still be located by searching on
PTI and Pamp triggered immunity" so this would require the synonym "fungal PTI"...etc] | process | go innate immune response add synonym go innate immune response is defined innate immune responses are defense responses mediated by germline encoded components that directly recognize components of potential pathogens so in this case we need narrow synonyms for pamp triggered immunity and pti which is what the first response in plants is called note that i could use the descendant term go antifungal innate immune response but i am avoiding this because i think should remove the organism specific terms and just describe the processes these are probably largely the same and only differ by recognition molecules so this seems to be an unnecessary axis of classification however if we keep these terms they should still be located by searching on pti and pamp triggered immunity so this would require the synonym fungal pti etc | 1 |
61,589 | 6,743,525,866 | IssuesEvent | 2017-10-20 12:29:46 | Vizzuality/half-earth | https://api.github.com/repos/Vizzuality/half-earth | closed | Global Spider (Chart 2) | ready-to-test-staging | - add legend “Percent of species adequately protected”
- make sure it's based on this data source:
https://docs.google.com/spreadsheets/d/1zBk28eP8H5kt83w2x8bwfJqDkLLPX8z8FG59iKHNYhs/edit#gid=2038236908
| 1.0 | Global Spider (Chart 2) - - add legend “Percent of species adequately protected”
- make sure it's based on this data source:
https://docs.google.com/spreadsheets/d/1zBk28eP8H5kt83w2x8bwfJqDkLLPX8z8FG59iKHNYhs/edit#gid=2038236908
| non_process | global spider chart add legend “percent of species adequately protected” make sure it s based on this data source | 0 |
760,860 | 26,658,987,354 | IssuesEvent | 2023-01-25 19:17:32 | googleapis/nodejs-storage | https://api.github.com/repos/googleapis/nodejs-storage | closed | Parallel Operations | type: feature request api: storage priority: p3 | The library should support built-in concurrent downloads and uploads.
This will help with customers looking to bulk-upload, such as #1865. | 1.0 | Parallel Operations - The library should support built-in concurrent downloads and uploads.
This will help with customers looking to bulk-upload, such as #1865. | non_process | parallel operations the library should support built in concurrent downloads and uploads this will help with customers looking to bulk upload such as | 0 |
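An editorial aside on the row above: the requested parallel operations amount to bounded fan-out over per-object transfers. A minimal sketch of that pattern, in Python with a hypothetical stand-in for the real upload call (this is not the nodejs-storage API):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def upload_one(bucket, name, data):
    # Hypothetical stand-in for a real per-object upload call.
    bucket[name] = data
    return name

def upload_many(bucket, files, max_workers=4):
    # Fan per-object uploads out over a bounded thread pool and
    # collect successes and failures separately.
    results, errors = [], []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(upload_one, bucket, name, data): name
                   for name, data in files.items()}
        for fut in as_completed(futures):
            try:
                results.append(fut.result())
            except Exception as exc:  # keep going; report per-object errors
                errors.append((futures[fut], exc))
    return results, errors
```

Real client libraries layer retries, checksums, and backpressure on top of this basic shape.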
15,182 | 18,954,957,666 | IssuesEvent | 2021-11-18 19:06:40 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | Documentation of count function | devops/prod doc-bug devops-cicd-process/tech needs-sme | Could you please add to the documentation of the counter function here:
https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops-2020#counter
that the `prefix`, although it is a string, may not contain certain special characters. I lost nearly two days figuring out that the string may not contain `.` symbols.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | Documentation of count function - Could you please add to the documentation of the counter function here:
https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops-2020#counter
that the `prefix`, although it is a string, may not contain certain special characters. I lost nearly two days figuring out that the string may not contain `.` symbols.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | process | documentation of count function could you please add to the documentation of the counter function here that the prefix although being a string may not contain certain special characters i lost nearly two days figuring out that the string may not contain symbols document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 1 |
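For readers of the row above: as documented, `counter(prefix, seed)` evaluates to the seed the first time a given prefix value is seen and increments on each later run with the same prefix. A toy in-memory model of that behavior (the real counters are persisted server-side per pipeline; the special-character restriction on `prefix` reported above is not modeled here):

```python
class CounterStore:
    """Toy model of the Azure Pipelines counter(prefix, seed) expression."""

    def __init__(self):
        self._next = {}  # prefix -> last value handed out

    def counter(self, prefix, seed=0):
        # First run with this prefix yields the seed; later runs increment.
        if prefix not in self._next:
            self._next[prefix] = seed
        else:
            self._next[prefix] += 1
        return self._next[prefix]
```

With a date-based prefix, the counter restarts from the seed each day, which is the common build-number idiom.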
159,030 | 12,452,241,035 | IssuesEvent | 2020-05-27 11:57:26 | keptn/keptn | https://api.github.com/repos/keptn/keptn | opened | Quality-gates integration test sometimes fails | automation type:test | Example: https://travis-ci.org/github/keptn/keptn/jobs/691597058
```
Adding SLI File: test/assets/quality_gates_standalone_sli_dynatrace_step2.yaml
Adding resource test/assets/quality_gates_standalone_sli_dynatrace_step2.yaml to service frontend in stage hardening in project musicshop
Resource has been uploaded.
Trying to get evaluation-done event with context-id edf91cf8-4ce6-4103-9743-9a06a65ce37b
No event returned
Retry: 1/30 - Wait 10s for evaluation-done event
No event returned
Retry: 2/30 - Wait 10s for evaluation-done event
...
Checking .source: lighthouse-service ✓
Checking .type: sh.keptn.events.evaluation-done ✓
Checking .data.project: musicshop ✓
Checking .data.stage: hardening ✓
Checking .data.service: frontend ✓
[keptn|ERROR] [2020-05-27 09:42:23] ERROR: Checking .data.result, expected 'pass', got 'warning' ❌
```
I'm guessing the SLOs for this test are too strict, and result in a warning instead of pass.
We can either modify the SLOs, or we allow pass or warning in that case. | 1.0 | Quality-gates integration test sometimes fails - Example: https://travis-ci.org/github/keptn/keptn/jobs/691597058
```
Adding SLI File: test/assets/quality_gates_standalone_sli_dynatrace_step2.yaml
Adding resource test/assets/quality_gates_standalone_sli_dynatrace_step2.yaml to service frontend in stage hardening in project musicshop
Resource has been uploaded.
Trying to get evaluation-done event with context-id edf91cf8-4ce6-4103-9743-9a06a65ce37b
No event returned
Retry: 1/30 - Wait 10s for evaluation-done event
No event returned
Retry: 2/30 - Wait 10s for evaluation-done event
...
Checking .source: lighthouse-service ✓
Checking .type: sh.keptn.events.evaluation-done ✓
Checking .data.project: musicshop ✓
Checking .data.stage: hardening ✓
Checking .data.service: frontend ✓
[keptn|ERROR] [2020-05-27 09:42:23] ERROR: Checking .data.result, expected 'pass', got 'warning' ❌
```
I'm guessing the SLOs for this test are too strict, and result in a warning instead of pass.
We can either modify the SLOs, or we allow pass or warning in that case. | non_process | quality gates integration test sometimes fails example adding sli file test assets quality gates standalone sli dynatrace yaml adding resource test assets quality gates standalone sli dynatrace yaml to service frontend in stage hardening in project musicshop resource has been uploaded trying to get evaluation done event with context id no event returned retry wait for evaluation done event no event returned retry wait for evaluation done event checking source lighthouse service ✓ checking type sh keptn events evaluation done ✓ checking data project musicshop ✓ checking data stage hardening ✓ checking data service frontend ✓ error checking data result expected pass got warning ❌ i m guessing the slos for this test are too strict and result in a warning instead of pass we can either modify the slos or we allow pass or warning in that case | 0 |
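The log in the row above follows a plain poll-with-retries shape. A sketch of that loop in Python, with a `fetch` callable standing in for the real "get evaluation-done event" request (an assumption for illustration):

```python
import time

def wait_for_event(fetch, retries=30, delay=10.0, sleep=time.sleep):
    # Poll fetch() until it yields an event or the retries run out,
    # mirroring the "Retry: n/30 - Wait 10s" lines in the log above.
    event = fetch()
    for attempt in range(1, retries + 1):
        if event is not None:
            return event
        print(f"Retry: {attempt}/{retries} - Wait {delay:.0f}s for evaluation-done event")
        sleep(delay)
        event = fetch()
    return event  # None if the event never arrived
```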
608,768 | 18,848,470,042 | IssuesEvent | 2021-11-11 17:33:59 | ooni/probe | https://api.github.com/repos/ooni/probe | opened | engine: support running signed tor binary in same directory | priority/high | We're going to ship a tor binary with probe-desktop. This requires less engineering effort than always using go-libtor (https://github.com/ooni/probe/issues/1866). We're going to sign the tor binary and verify it before running. | 1.0 | engine: support running signed tor binary in same directory - We're going to ship a tor binary with probe-desktop. This requires less engineering effort than always using go-libtor (https://github.com/ooni/probe/issues/1866). We're going to sign the tor binary and verify it before running. | non_process | engine support running signed tor binary in same directory we re going to ship a tor binary with probe desktop this requires less engineering effort than always using go libtor we re going to sign the tor binary and verify it before running | 0 |
28,568 | 12,887,448,561 | IssuesEvent | 2020-07-13 11:15:48 | gabordemooij/redbean | https://api.github.com/repos/gabordemooij/redbean | closed | question/suggestion on non static usage | service & support | Dear sir,
Maybe I am missing something, but I don't see an obvious solution for calling e.g. R::store() on a particular toolbox (and likewise all the other R::xyz() facade functions), which is typical when using RedBean in a non-static environment. I did of course read the page [here](https://redbeanphp.com/index.php?p=/non_static).
I am trying to use multiple databases and would like to call the R::xyz() functions in the Facade directly on a particular toolbox. In the case of R::store([...]) I would expect something like this:
```
// facade function
public static function store( $bean, $unfreezeIfNeeded = FALSE ) {
self::storeBackend($bean, $unfreezeIfNeeded, self::$redbean);
}
// this function can be called by both the facade and
// with $toolbox->getRedBean() for non-static usage
public static function storeBackend( $bean, $unfreezeIfNeeded , $redbean )
{
[body of function]
}
```
(Obviously the storeBackend() function should normally not be in the facade itself but in a backend worker class of some kind.)
Any suggestions on how to cleanly resolve this?
| 1.0 | question/suggestion on non static usage - Dear sir,
Maybe I am missing something, but I don't see an obvious solution for calling e.g. R::store() on a particular toolbox (and likewise all the other R::xyz() facade functions), which is typical when using RedBean in a non-static environment. I did of course read the page [here](https://redbeanphp.com/index.php?p=/non_static).
I am trying to use multiple databases and would like to call the R::xyz() functions in the Facade directly on a particular toolbox. In the case of R::store([...]) I would expect something like this:
```
// facade function
public static function store( $bean, $unfreezeIfNeeded = FALSE ) {
self::storeBackend($bean, $unfreezeIfNeeded, self::$redbean);
}
// this function can be called by both the facade and
// with $toolbox->getRedBean() for non-static usage
public static function storeBackend( $bean, $unfreezeIfNeeded , $redbean )
{
[body of function]
}
```
(Obviously the storeBackend() function should normally not be in the facade itself but in a backend worker class of some kind.)
Any suggestions on how to cleanly resolve this?
| non_process | question suggestion on non static usage dear sir maybe i am missing something but i don t see an obvious solution to calling i e r store on a particular toolbox and all the other r xyz facade functions which is typical when using redbean in a non static environment i did of course read the page i am trying to use multiple databases and would like to call the r xyz functions in the facade directly on a particular toolbox in the case of r store i would expect something like this facade function public static function store bean unfreezeifneeded false self storebackend bean unfreezeifneeded self redbean this function can be called by both the facade and with toolbox getredbean for non static usage public static function storebackend bean unfreezeifneeded redbean obviously the storebackend function should normally not be in the facade itself but in a backend worker class of some kind any suggestions how to cleanly resolve this | 0 |
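The shape asked for in the row above (one backend implementation taking an explicit engine/toolbox, called both by the static facade and by per-database code) looks like this in Python; the names are illustrative, not the RedBeanPHP API:

```python
class Engine:
    """Stand-in for one database connection (a 'toolbox')."""
    def __init__(self):
        self.rows = []

def store_backend(bean, engine):
    # Single shared implementation; callers choose the engine.
    engine.rows.append(bean)
    return len(engine.rows)  # pretend primary key

class R:
    """Static facade bound to a default engine."""
    engine = Engine()

    @classmethod
    def store(cls, bean):
        return store_backend(bean, cls.engine)
```

R.store(...) keeps the one-liner ergonomics, while store_backend(bean, other_engine) targets any other database explicitly.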
12,070 | 14,739,810,199 | IssuesEvent | 2021-01-07 07:58:30 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Switching customers | anc-process anp-2.5 ant-enhancement | In GitLab by @kdjstudios on Sep 19, 2018, 15:07
Hello Team,
I recall operations used to be able to switch accounts from one customer to another. If I recall correctly we had to remove this feature due to issues it was causing on the back end and have since then been forcing operations to contact support to manually move/merge accounts to a new customer.
Does anyone else recall why it was never fixed? | 1.0 | Switching customers - In GitLab by @kdjstudios on Sep 19, 2018, 15:07
Hello Team,
I recall operations used to be able to switch accounts from one customer to another. If I recall correctly we had to remove this feature due to issues it was causing on the back end and have since then been forcing operations to contact support to manually move/merge accounts to a new customer.
Does anyone else recall why it was never fixed? | process | switching customers in gitlab by kdjstudios on sep hello team i recall operations used to be able to switch accounts from one customer to another if i recall correctly we had to remove this feature due to issues it was causing on the back end and have since then been forcing operations to contact support to manually move merge accounts to a new customer does anyone else recall why it was never fixed | 1 |
11,580 | 14,444,000,039 | IssuesEvent | 2020-12-07 20:32:12 | MelissaMorales13/5a | https://api.github.com/repos/MelissaMorales13/5a | closed | fill_size_estimating_template | process-dashboard | -fill in the lines-of-code size estimating template in Process Dashboard
-run the PROBE wizard | 1.0 | fill_size_estimating_template - -fill in the lines-of-code size estimating template in Process Dashboard
-run the PROBE wizard | process | fill size estimating template fill in the lines of code size estimating template in process dashboard run the probe wizard | 1
17,109 | 22,634,421,657 | IssuesEvent | 2022-06-30 17:27:13 | googleapis/google-cloud-go | https://api.github.com/repos/googleapis/google-cloud-go | closed | core: release a v1 | type: process | **Is your feature request related to a problem? Please describe.**
As the core module for google-cloud-go is still v0, there are no compatibility guarantees for the underpinnings of the generated libraries. When we import the SDK and all of its submodules into the monorepo, we'd like not to be in the case where we're forced to perform an all-submodules update if such a breaking change is made, as we're restricted to having exactly one implementation of the core module.
**Describe the solution you'd like**
If you release a v1, you'd be committing to a higher compatibility guarantee, and reducing the likelihood of requiring whole-world updates when importing, since older per-service submodules should be able to continue to operate on dot releases of v1, or if a v2 is necessary, then they can continue to operate on a mix of v1 and v2.
**Describe alternatives you've considered**
The status quo just means that breaking changes will force importers to go to significant lengths to maintain monorepo policy, and will (further) stall imports of modern versions of the SDK.
**Additional context**
monorepo policy includes: you can have exactly one version of a Go library per major version of that library, and that version must satisfy all its dependencies already in the monorepo. Thus a in-major-version breaking change requires an atomic change of the library with the breaking change and every client that must be amended to handle the in-major-version breaking change. | 1.0 | core: release a v1 - **Is your feature request related to a problem? Please describe.**
As the core module for google-cloud-go is still v0, there are no compatibility guarantees for the underpinnings of the generated libraries. When we import the SDK and all of its submodules into the monorepo, we'd like not to be in the case where we're forced to perform an all-submodules update if such a breaking change is made, as we're restricted to having exactly one implementation of the core module.
**Describe the solution you'd like**
If you release a v1, you'd be committing to a higher compatibility guarantee, and reducing the likelihood of requiring whole-world updates when importing, since older per-service submodules should be able to continue to operate on dot releases of v1, or if a v2 is necessary, then they can continue to operate on a mix of v1 and v2.
**Describe alternatives you've considered**
The status quo just means that breaking changes will force importers to go to significant lengths to maintain monorepo policy, and will (further) stall imports of modern versions of the SDK.
**Additional context**
monorepo policy includes: you can have exactly one version of a Go library per major version of that library, and that version must satisfy all its dependencies already in the monorepo. Thus a in-major-version breaking change requires an atomic change of the library with the breaking change and every client that must be amended to handle the in-major-version breaking change. | process | core release a is your feature request related to a problem please describe as the core module for google cloud go is still there are no compatibility guarantees for the underpinnings of the generated libraries when we import the sdk and all of its submodules into the monorepo we d like not to be in the case where we re forced to perform an all submodules update if such a breaking change is made as we re restricted to having exactly one implementation of the core module describe the solution you d like if you release a you d be committing to a higher compatibility guarantee and reducing the likelihood of requiring whole world updates when importing since older per service submodules should be able to continue to operate on dot releases of or if a is necessary then they can continue to operate on a mix of and describe alternatives you ve considered the status quo just means that breaking changes will force importers to go to significant lengths to maintain monorepo policy and will further stall imports of modern versions of the sdk additional context monorepo policy includes you can have exactly one version of a go library per major version of that library and that version must satisfy all its dependencies already in the monorepo thus a in major version breaking change requires an atomic change of the library with the breaking change and every client that must be amended to handle the in major version breaking change | 1 |
396,704 | 11,712,544,587 | IssuesEvent | 2020-03-09 08:35:35 | input-output-hk/ouroboros-network | https://api.github.com/repos/input-output-hk/ouroboros-network | closed | Make Immutable DB independent from EpochSize | consensus immutable db priority high shelley mainnet | At the moment, the immutable DB stores blocks per epoch. Unfortunately, this introduces a dependency between the immutable DB and the ledger state, because the ledger state tells us when epoch sizes change. Instead we should give the immutable DB a fixed "chunk" parameter, with a precondition that _if_ there are EBBs present, they _must_ line up with the first slot in each chunk. Unfortunately, this means that the size of the files stored on disk will go down a factor of 10 or even 20 (depending on the choice of the Praos `f` parameter), but after discussing this with @dcoutts , we considered this an acceptable compromise.
This is blocking the work on the hard fork combinator; indeed, it is blocking the work at #1698 on #1637 / #1205 . | 1.0 | Make Immutable DB independent from EpochSize - At the moment, the immutable DB stores blocks per epoch. Unfortunately, this introduces a dependency between the immutable DB and the ledger state, because the ledger state tells us when epoch sizes change. Instead we should give the immutable DB a fixed "chunk" parameter, with a precondition that _if_ there are EBBs present, they _must_ line up with the first slot in each chunk. Unfortunately, this means that the size of the files stored on disk will go down a factor of 10 or even 20 (depending on the choice of the Praos `f` parameter), but after discussing this with @dcoutts , we considered this an acceptable compromise.
This is blocking the work on the hard fork combinator; indeed, it is blocking the work at #1698 on #1637 / #1205 . | non_process | make immutable db independent from epochsize at the moment the immutable db stores blocks per epoch unfortunately this introduces a dependency between the immutable db and the ledger state because the ledger state tells us when epoch sizes change instead we should give the immutable db a fixed chunk parameter with a precondition that if there are ebbs present they must line up with the first slot in each chunk unfortunately this means that the size of the files stored on disk will go down a factor of or even depending on the choice of the praos f parameter but after discussing this with dcoutts we considered this an acceptable compromise this is blocking the work on the hard fork combinator indeed it is blocking the work at on | 0 |
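The fixed-chunk scheme described in the row above can be sketched in a few lines: with a chunk size chosen once, the file a block lands in depends only on its slot number, and the EBB precondition becomes a simple alignment check (the chunk size here is illustrative, not the real parameter):

```python
CHUNK_SIZE = 100  # fixed up front; illustrative value

def chunk_of(slot):
    # Map an absolute slot number to (chunk file, offset within chunk);
    # no ledger-state / epoch-size lookup is needed.
    return slot // CHUNK_SIZE, slot % CHUNK_SIZE

def ebb_slot_ok(slot):
    # Precondition: if EBBs are present, they must line up with the
    # first slot of a chunk.
    return slot % CHUNK_SIZE == 0
```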
9,658 | 12,640,724,924 | IssuesEvent | 2020-06-16 03:58:03 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Datetime Input Parameter in Processing Graphical Modeler not working | Bug Processing | Not sure where exactly the problem is, anyway. The `datetime` field of the `test` layer contains some random well formatted `datetime` values.
Using the algorithm `Extract by expression` with a simple expression like:
`to_date( "datetime" ) = make_date(2020, 06, 15)`
works perfectly (1 feature extracted). Putting this in a Processing model with a `Datetime` input does not seem to work.
The model is very simple (and included in the project attached with the name `Datetime extraction`):

`enterdate` is the `Datetime` parameter but it seems not available in the Expression Builder in the Model and also when running the model:
1. Within the model settings (no `@enterdate` variable)

2. During the model running (no `@enterdate` variable)

In both cases, the expression `parameter('@enterdate')` is not evaluated either.
If the parameter `enterdate` is a String input, then the parameter is correctly exposed in the Expression Builder within the model settings and also when running it.
This is the second model, named `String extraction`, also included in the attached project:

and the input parameter `enterdate_string` is correctly exposed as a variable in all the expression builder dialogs:
1. Within the model settings (`@enterdate_string` variable)

2. During the model running (`@enterdate_string` variable) used with the expression `to_date("datetime") = to_date(@enterdatestring)`:

and the result is correctly generated
Here the attached project with data and both models saved as project models [datetime_test.zip](https://github.com/qgis/QGIS/files/4778411/datetime_test.zip)
| 1.0 | Datetime Input Parameter in Processing Graphical Modeler not working - Not sure where exactly the problem is, anyway. The `datetime` field of the `test` layer contains some random well formatted `datetime` values.
Using the algorithm `Extract by expression` with a simple expression like:
`to_date( "datetime" ) = make_date(2020, 06, 15)`
works perfectly (1 feature extracted). Putting this in a Processing model with a `Datetime` input does not seem to work.
The model is very simple (and included in the project attached with the name `Datetime extraction`):

`enterdate` is the `Datetime` parameter but it seems not available in the Expression Builder in the Model and also when running the model:
1. Within the model settings (no `@enterdate` variable)

2. During the model running (no `@enterdate` variable)

In both cases, the expression `parameter('@enterdate')` is not evaluated either.
If the parameter `enterdate` is a String input, then the parameter is correctly exposed in the Expression Builder within the model settings and also when running it.
This is the second model, named `String extraction`, also included in the attached project:

and the input parameter `enterdate_string` is correctly exposed as a variable in all the expression builder dialogs:
1. Within the model settings (`@enterdate_string` variable)

2. During the model running (`@enterdate_string` variable) used with the expression `to_date("datetime") = to_date(@enterdatestring)`:

and the result is correctly generated
Here the attached project with data and both models saved as project models [datetime_test.zip](https://github.com/qgis/QGIS/files/4778411/datetime_test.zip)
| process | datetime input parameter in processing graphical modeler not working not sure where exactly the problem is anyway the datetime field of the test layer contains some random well formatted datetime values using the algorithm extract by expression with a simple expression like to date datetime make date works perfectly feature extracted putting this in a processing model with a datetime input seems not working the model is very simple and included in the project attached with the name datetime extraction enterdate is the datetime parameter but it seems not available in the expression builder in the model and also when running the model within the model settings no enterdate variable during the model running no enterdate variable in both cases also the expression parameter enterdate is not evaluated if the parameter enterdate is a string input then the parameters is correctly exposed in the expression builder within the model settings and also when running it this is the second model with the name string extraction always attached with the project itself and the input parameter enterdate string exposed correctly as variable in all the expression builder dialogs within the model settings enterdate string variable during the model running enterdate string variable used with the expression to date datetime to date enterdatestring and the result is correctly generated here the attached project with data and both models saved as project models | 1 |
116,885 | 9,887,048,707 | IssuesEvent | 2019-06-25 08:20:18 | FreeRDP/FreeRDP | https://api.github.com/repos/FreeRDP/FreeRDP | closed | When vaapisink is used for TSMF (gstreamer), keyboard and mouse stops working on the video surface. | fixed-waiting-test | **Describe the bug**
With the recent versions of gstreamer-vaapi, the vaapisink will handle GstNavigation stuff, and this will break input event handling. If WMP in fullscreen mode, there is no way to control video, even close the window. [Similar issue](https://bugzilla.redhat.com/show_bug.cgi?id=1167029)
**To Reproduce**
Steps to reproduce the behavior:
1. Install latest gst-plugins-vaapi
2. Compile freerdp with gstreamer1.0 support
3. Connect to server with enabled MMR (set gstreamer as multimedia decoder)
4. Play any H.264 video in fullscreen mode.
5. Keyboard and mouse stop working
**Expected behavior**
Keyboard and mouse should work
**Application details**
* Version of FreeRDP
` FreeRDP version 2.0.0-rc2 (2.0.0-rc2)`
* Command line used
`xfreerdp /f /bpp:16 /cert-ignore /kbd:0x409 /multimedia:sys:alsa /multimedia:decoder:gstreamer /v:ts2008`
* output of `/buildconfig`:
```
This is FreeRDP version 2.0.0-rc2 (2.0.0-rc2)
Build configuration: BUILD_TESTING=OFF BUILTIN_CHANNELS=ON HAVE_AIO_H=1 HAVE_EXECINFO_H=1 HAVE_FCNTL_H=1 HAVE_INTTYPES_H=1 HAVE_MATH_C99_LONG_DOUBLE=1 HAVE_POLL_H=1 HAVE_PTHREAD_MUTEX_TIMEDLOCK=ON HAVE_PTHREAD_MUTEX_TIMEDLOCK_LIB=1 HAVE_PTHREAD_MUTEX_TIMEDLOCK_SYMBOL= HAVE_SYSLOG_H=1 HAVE_SYS_EVENTFD_H=1 HAVE_SYS_FILIO_H= HAVE_SYS_MODEM_H= HAVE_SYS_SELECT_H=1 HAVE_SYS_SOCKIO_H= HAVE_SYS_STRTIO_H= HAVE_SYS_TIMERFD_H=1 HAVE_TM_GMTOFF=1 HAVE_UNISTD_H=1 HAVE_XI_TOUCH_CLASS=1 WITH_ALSA=ON WITH_CCACHE=ON WITH_CHANNELS=ON WITH_CLIENT=ON WITH_CLIENT_AVAILABLE=1 WITH_CLIENT_CHANNELS=ON WITH_CLIENT_CHANNELS_AVAILABLE=1 WITH_CLIENT_COMMON=ON WITH_CLIENT_INTERFACE=OFF WITH_CUNIT=OFF WITH_CUPS=OFF WITH_DEBUG_ALL=OFF WITH_DEBUG_CAPABILITIES=OFF WITH_DEBUG_CERTIFICATE=OFF WITH_DEBUG_CHANNELS=OFF WITH_DEBUG_CLIPRDR=OFF WITH_DEBUG_DVC=OFF WITH_DEBUG_KBD=OFF WITH_DEBUG_LICENSE=OFF WITH_DEBUG_MUTEX=OFF WITH_DEBUG_NEGO=OFF WITH_DEBUG_NLA=OFF WITH_DEBUG_NTLM=OFF WITH_DEBUG_RAIL=OFF WITH_DEBUG_RDP=OFF WITH_DEBUG_RDPDR=OFF WITH_DEBUG_RDPEI=OFF WITH_DEBUG_REDIR=OFF WITH_DEBUG_RFX=OFF WITH_DEBUG_RINGBUFFER=OFF WITH_DEBUG_SCARD=OFF WITH_DEBUG_SND=OFF WITH_DEBUG_SVC=OFF WITH_DEBUG_SYMBOLS=OFF WITH_DEBUG_THREADS=OFF WITH_DEBUG_TIMEZONE=OFF WITH_DEBUG_TRANSPORT=OFF WITH_DEBUG_TSG=OFF WITH_DEBUG_TSMF=OFF WITH_DEBUG_WND=OFF WITH_DEBUG_X11=OFF WITH_DEBUG_X11_CLIPRDR=OFF WITH_DEBUG_X11_LOCAL_MOVESIZE=OFF WITH_DEBUG_XV=OFF WITH_DIRECTFB=OFF WITH_EVENTFD_READ_WRITE=1 WITH_FFMPEG=OFF WITH_GFX_H264=OFF WITH_GPROF=OFF WITH_GSM=OFF WITH_GSSAPI=OFF WITH_GSTREAMER_0_10=OFF WITH_GSTREAMER_1_0=ON WITH_ICU=OFF WITH_IPP=OFF WITH_JPEG=OFF WITH_LIBRARY_VERSIONING=ON WITH_LIBSYSTEMD=OFF WITH_MACAUDIO=OFF WITH_MACAUDIO=OFF WITH_MACAUDIO_AVAILABLE=0 WITH_MANPAGES=OFF WITH_MBEDTLS=OFF WITH_NEON=OFF WITH_OPENH264=OFF WITH_OPENSLES=OFF WITH_OPENSSL=ON WITH_OSS=OFF WITH_PAM=OFF WITH_PCSC=OFF WITH_PROFILER=OFF WITH_PULSE=OFF WITH_PULSEAUDIO=OFF WITH_SAMPLE=OFF WITH_SANITIZE_ADDRESS=OFF WITH_SANITIZE_ADDRESS_AVAILABLE=1 
WITH_SANITIZE_MEMORY=OFF WITH_SANITIZE_MEMORY_AVAILABLE=1 WITH_SANITIZE_THREAD=OFF WITH_SANITIZE_THREAD_AVAILABLE=1 WITH_SERVER=OFF WITH_SERVER_INTERFACE=ON WITH_SMARTCARD_INSPECT=OFF WITH_SSE2=ON WITH_THIRD_PARTY=OFF WITH_VALGRIND_MEMCHECK=OFF WITH_VALGRIND_MEMCHECK_AVAILABLE=1 WITH_WAYLAND=OFF WITH_X11=ON WITH_X264=OFF WITH_XCURSOR=ON WITH_XDAMAGE=ON WITH_XEXT=ON WITH_XFIXES=ON WITH_XI=ON WITH_XINERAMA=ON WITH_XKBFILE=ON WITH_XRANDR=ON WITH_XRENDER=ON WITH_XSHM=ON WITH_XV=ON WITH_ZLIB=ON
Build type: Release
CFLAGS: -m64 -march=core2 -mtune=core2 -msse3 -mfpmath=sse --sysroot=/home/yocto/poky/build/tmp/work/core2-64-tcx11-linux/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0/recipe-sysroot -O2 -pipe -g -feliminate-unused-debug-types -fdebug-prefix-map=/home/yocto/poky/build/tmp/work/core2-64-tcx11-linux/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0=/usr/src/debug/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0 -fdebug-prefix-map=/home/yocto/poky/build/tmp/work/core2-64-tcx11-linux/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0/recipe-sysroot= -fdebug-prefix-map=/home/yocto/poky/build/tmp/work/core2-64-tcx11-linux/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0/recipe-sysroot-native= -m64 -march=core2 -mtune=core2 -msse3 -mfpmath=sse --sysroot=/home/yocto/poky/build/tmp/work/core2-64-tcx11-linux/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0/recipe-sysroot -fPIC -Wall -Wno-unused-result -Wno-unused-but-set-variable -Wno-deprecated-declarations -fvisibility=hidden -Wimplicit-function-declaration -Wredundant-decls
Compiler: GNU, 7.3.0
Target architecture: x64
```
* OS version connecting to
Windows Server 2008 R2
**Desktop (please complete the following information):**
- OS: Linux
**Additional context**
Issue can be resolved with below patch:
```
diff -Naur a/channels/tsmf/client/gstreamer/tsmf_X11.c b/channels/tsmf/client/gstreamer/tsmf_X11.c
--- a/channels/tsmf/client/gstreamer/tsmf_X11.c 2019-06-17 12:53:03.105884324 +0500
+++ b/channels/tsmf/client/gstreamer/tsmf_X11.c 2019-06-17 12:55:44.255619829 +0500
@@ -114,7 +114,7 @@
#if GST_VERSION_MAJOR > 0
hdl->overlay = GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message));
gst_video_overlay_set_window_handle(hdl->overlay, hdl->subwin);
- gst_video_overlay_handle_events(hdl->overlay, TRUE);
+ gst_video_overlay_handle_events(hdl->overlay, FALSE);
#else
hdl->overlay = GST_X_OVERLAY (GST_MESSAGE_SRC (message));
#if GST_CHECK_VERSION(0,10,31)
``` | 1.0 | When vaapisink is used for TSMF (gstreamer), keyboard and mouse stops working on the video surface. - **Describe the bug**
With the recent versions of gstreamer-vaapi, the vaapisink will handle GstNavigation stuff, and this will break input event handling. If WMP in fullscreen mode, there is no way to control video, even close the window. [Similar issue](https://bugzilla.redhat.com/show_bug.cgi?id=1167029)
**To Reproduce**
Steps to reproduce the behavior:
1. Install latest gst-plugins-vaapi
2. Compile freerdp with gstreamer1.0 support
3. Connect to server with enabled MMR (set gstreamer as multimedia decoder)
4. Play any H.264 video in fullscreen mode.
5. Keyboard and mouse stops working
**Expected behavior**
Keyboard and mouse should work
**Application details**
* Version of FreeRDP
` FreeRDP version 2.0.0-rc2 (2.0.0-rc2)`
* Command line used
`xfreerdp /f /bpp:16 /cert-ignore /kbd:0x409 /multimedia:sys:alsa /multimedia:decoder:gstreamer /v:ts2008`
* output of `/buildconfig`:
```
This is FreeRDP version 2.0.0-rc2 (2.0.0-rc2)
Build configuration: BUILD_TESTING=OFF BUILTIN_CHANNELS=ON HAVE_AIO_H=1 HAVE_EXECINFO_H=1 HAVE_FCNTL_H=1 HAVE_INTTYPES_H=1 HAVE_MATH_C99_LONG_DOUBLE=1 HAVE_POLL_H=1 HAVE_PTHREAD_MUTEX_TIMEDLOCK=ON HAVE_PTHREAD_MUTEX_TIMEDLOCK_LIB=1 HAVE_PTHREAD_MUTEX_TIMEDLOCK_SYMBOL= HAVE_SYSLOG_H=1 HAVE_SYS_EVENTFD_H=1 HAVE_SYS_FILIO_H= HAVE_SYS_MODEM_H= HAVE_SYS_SELECT_H=1 HAVE_SYS_SOCKIO_H= HAVE_SYS_STRTIO_H= HAVE_SYS_TIMERFD_H=1 HAVE_TM_GMTOFF=1 HAVE_UNISTD_H=1 HAVE_XI_TOUCH_CLASS=1 WITH_ALSA=ON WITH_CCACHE=ON WITH_CHANNELS=ON WITH_CLIENT=ON WITH_CLIENT_AVAILABLE=1 WITH_CLIENT_CHANNELS=ON WITH_CLIENT_CHANNELS_AVAILABLE=1 WITH_CLIENT_COMMON=ON WITH_CLIENT_INTERFACE=OFF WITH_CUNIT=OFF WITH_CUPS=OFF WITH_DEBUG_ALL=OFF WITH_DEBUG_CAPABILITIES=OFF WITH_DEBUG_CERTIFICATE=OFF WITH_DEBUG_CHANNELS=OFF WITH_DEBUG_CLIPRDR=OFF WITH_DEBUG_DVC=OFF WITH_DEBUG_KBD=OFF WITH_DEBUG_LICENSE=OFF WITH_DEBUG_MUTEX=OFF WITH_DEBUG_NEGO=OFF WITH_DEBUG_NLA=OFF WITH_DEBUG_NTLM=OFF WITH_DEBUG_RAIL=OFF WITH_DEBUG_RDP=OFF WITH_DEBUG_RDPDR=OFF WITH_DEBUG_RDPEI=OFF WITH_DEBUG_REDIR=OFF WITH_DEBUG_RFX=OFF WITH_DEBUG_RINGBUFFER=OFF WITH_DEBUG_SCARD=OFF WITH_DEBUG_SND=OFF WITH_DEBUG_SVC=OFF WITH_DEBUG_SYMBOLS=OFF WITH_DEBUG_THREADS=OFF WITH_DEBUG_TIMEZONE=OFF WITH_DEBUG_TRANSPORT=OFF WITH_DEBUG_TSG=OFF WITH_DEBUG_TSMF=OFF WITH_DEBUG_WND=OFF WITH_DEBUG_X11=OFF WITH_DEBUG_X11_CLIPRDR=OFF WITH_DEBUG_X11_LOCAL_MOVESIZE=OFF WITH_DEBUG_XV=OFF WITH_DIRECTFB=OFF WITH_EVENTFD_READ_WRITE=1 WITH_FFMPEG=OFF WITH_GFX_H264=OFF WITH_GPROF=OFF WITH_GSM=OFF WITH_GSSAPI=OFF WITH_GSTREAMER_0_10=OFF WITH_GSTREAMER_1_0=ON WITH_ICU=OFF WITH_IPP=OFF WITH_JPEG=OFF WITH_LIBRARY_VERSIONING=ON WITH_LIBSYSTEMD=OFF WITH_MACAUDIO=OFF WITH_MACAUDIO=OFF WITH_MACAUDIO_AVAILABLE=0 WITH_MANPAGES=OFF WITH_MBEDTLS=OFF WITH_NEON=OFF WITH_OPENH264=OFF WITH_OPENSLES=OFF WITH_OPENSSL=ON WITH_OSS=OFF WITH_PAM=OFF WITH_PCSC=OFF WITH_PROFILER=OFF WITH_PULSE=OFF WITH_PULSEAUDIO=OFF WITH_SAMPLE=OFF WITH_SANITIZE_ADDRESS=OFF WITH_SANITIZE_ADDRESS_AVAILABLE=1 
WITH_SANITIZE_MEMORY=OFF WITH_SANITIZE_MEMORY_AVAILABLE=1 WITH_SANITIZE_THREAD=OFF WITH_SANITIZE_THREAD_AVAILABLE=1 WITH_SERVER=OFF WITH_SERVER_INTERFACE=ON WITH_SMARTCARD_INSPECT=OFF WITH_SSE2=ON WITH_THIRD_PARTY=OFF WITH_VALGRIND_MEMCHECK=OFF WITH_VALGRIND_MEMCHECK_AVAILABLE=1 WITH_WAYLAND=OFF WITH_X11=ON WITH_X264=OFF WITH_XCURSOR=ON WITH_XDAMAGE=ON WITH_XEXT=ON WITH_XFIXES=ON WITH_XI=ON WITH_XINERAMA=ON WITH_XKBFILE=ON WITH_XRANDR=ON WITH_XRENDER=ON WITH_XSHM=ON WITH_XV=ON WITH_ZLIB=ON
Build type: Release
CFLAGS: -m64 -march=core2 -mtune=core2 -msse3 -mfpmath=sse --sysroot=/home/yocto/poky/build/tmp/work/core2-64-tcx11-linux/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0/recipe-sysroot -O2 -pipe -g -feliminate-unused-debug-types -fdebug-prefix-map=/home/yocto/poky/build/tmp/work/core2-64-tcx11-linux/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0=/usr/src/debug/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0 -fdebug-prefix-map=/home/yocto/poky/build/tmp/work/core2-64-tcx11-linux/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0/recipe-sysroot= -fdebug-prefix-map=/home/yocto/poky/build/tmp/work/core2-64-tcx11-linux/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0/recipe-sysroot-native= -m64 -march=core2 -mtune=core2 -msse3 -mfpmath=sse --sysroot=/home/yocto/poky/build/tmp/work/core2-64-tcx11-linux/freerdp/2.0.0+gitrAUTOINC+7a7b180277-r0/recipe-sysroot -fPIC -Wall -Wno-unused-result -Wno-unused-but-set-variable -Wno-deprecated-declarations -fvisibility=hidden -Wimplicit-function-declaration -Wredundant-decls
Compiler: GNU, 7.3.0
Target architecture: x64
```
* OS version connecting to
Windows Server 2008 R2
**Desktop (please complete the following information):**
- OS: Linux
**Additional context**
Issue can be resolved with below patch:
```
diff -Naur a/channels/tsmf/client/gstreamer/tsmf_X11.c b/channels/tsmf/client/gstreamer/tsmf_X11.c
--- a/channels/tsmf/client/gstreamer/tsmf_X11.c 2019-06-17 12:53:03.105884324 +0500
+++ b/channels/tsmf/client/gstreamer/tsmf_X11.c 2019-06-17 12:55:44.255619829 +0500
@@ -114,7 +114,7 @@
#if GST_VERSION_MAJOR > 0
hdl->overlay = GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message));
gst_video_overlay_set_window_handle(hdl->overlay, hdl->subwin);
- gst_video_overlay_handle_events(hdl->overlay, TRUE);
+ gst_video_overlay_handle_events(hdl->overlay, FALSE);
#else
hdl->overlay = GST_X_OVERLAY (GST_MESSAGE_SRC (message));
#if GST_CHECK_VERSION(0,10,31)
``` | non_process | when vaapisink is used for tsmf gstreamer keyboard and mouse stops working on the video surface describe the bug with the recent versions of gstreamer vaapi the vaapisink will handle gstnavigation stuff and this will break input event handling if wmp in fullscreen mode there is no way to control video even close the window to reproduce steps to reproduce the behavior install latest gst plugins vaapi compile freerdp with support connect to server with enabled mmr set gstreamer as multimedia decoder play any h video in fullscreen mode keyboard and mouse stops working expected behavior keyboard and mouse should work application details version of freerdp freerdp version command line used xfreerdp f bpp cert ignore kbd multimedia sys alsa multimedia decoder gstreamer v output of buildconfig this is freerdp version build configuration build testing off builtin channels on have aio h have execinfo h have fcntl h have inttypes h have math long double have poll h have pthread mutex timedlock on have pthread mutex timedlock lib have pthread mutex timedlock symbol have syslog h have sys eventfd h have sys filio h have sys modem h have sys select h have sys sockio h have sys strtio h have sys timerfd h have tm gmtoff have unistd h have xi touch class with alsa on with ccache on with channels on with client on with client available with client channels on with client channels available with client common on with client interface off with cunit off with cups off with debug all off with debug capabilities off with debug certificate off with debug channels off with debug cliprdr off with debug dvc off with debug kbd off with debug license off with debug mutex off with debug nego off with debug nla off with debug ntlm off with debug rail off with debug rdp off with debug rdpdr off with debug rdpei off with debug redir off with debug rfx off with debug ringbuffer off with debug scard off with debug snd off with debug svc off with debug symbols off with debug 
threads off with debug timezone off with debug transport off with debug tsg off with debug tsmf off with debug wnd off with debug off with debug cliprdr off with debug local movesize off with debug xv off with directfb off with eventfd read write with ffmpeg off with gfx off with gprof off with gsm off with gssapi off with gstreamer off with gstreamer on with icu off with ipp off with jpeg off with library versioning on with libsystemd off with macaudio off with macaudio off with macaudio available with manpages off with mbedtls off with neon off with off with opensles off with openssl on with oss off with pam off with pcsc off with profiler off with pulse off with pulseaudio off with sample off with sanitize address off with sanitize address available with sanitize memory off with sanitize memory available with sanitize thread off with sanitize thread available with server off with server interface on with smartcard inspect off with on with third party off with valgrind memcheck off with valgrind memcheck available with wayland off with on with off with xcursor on with xdamage on with xext on with xfixes on with xi on with xinerama on with xkbfile on with xrandr on with xrender on with xshm on with xv on with zlib on build type release cflags march mtune mfpmath sse sysroot home yocto poky build tmp work linux freerdp gitrautoinc recipe sysroot pipe g feliminate unused debug types fdebug prefix map home yocto poky build tmp work linux freerdp gitrautoinc usr src debug freerdp gitrautoinc fdebug prefix map home yocto poky build tmp work linux freerdp gitrautoinc recipe sysroot fdebug prefix map home yocto poky build tmp work linux freerdp gitrautoinc recipe sysroot native march mtune mfpmath sse sysroot home yocto poky build tmp work linux freerdp gitrautoinc recipe sysroot fpic wall wno unused result wno unused but set variable wno deprecated declarations fvisibility hidden wimplicit function declaration wredundant decls compiler gnu target architecture os version 
connecting to windows server desktop please complete the following information os linux additional context issue can be resolved with below patch diff naur a channels tsmf client gstreamer tsmf c b channels tsmf client gstreamer tsmf c a channels tsmf client gstreamer tsmf c b channels tsmf client gstreamer tsmf c if gst version major hdl overlay gst video overlay gst message src message gst video overlay set window handle hdl overlay hdl subwin gst video overlay handle events hdl overlay true gst video overlay handle events hdl overlay false else hdl overlay gst x overlay gst message src message if gst check version | 0 |
64,189 | 18,272,608,288 | IssuesEvent | 2021-10-04 15:12:56 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | libva error: init failed | T-Defect Z-Platform-Specific S-Critical A-Electron O-Uncommon | ### Steps to reproduce
1. Install element-desktop following the instructions from the official site using apt repo
2. Run element-desktop from terminal
3. Get error "libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed"
### What happened?
### What did you expect?
Proper run
### What happened?
Element-desktop runs, but all I can see is the window filled with white
### Operating system
Ubuntu 21.10, gnome 40.4.0, X11
### Application version
Element 1.9.0
### How did you install the app?
https://packages.riot.im/debian/ default main
### Homeserver
matrix.org
### Have you submitted a rageshake?
No | 1.0 | libva error: init failed - ### Steps to reproduce
1. Install element-desktop following the instructions from the official site using apt repo
2. Run element-desktop from terminal
3. Get error "libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed"
### What happened?
### What did you expect?
Proper run
### What happened?
Element-desktop runs, but all I can see is the window filled with white
### Operating system
Ubuntu 21.10, gnome 40.4.0, X11
### Application version
Element 1.9.0
### How did you install the app?
https://packages.riot.im/debian/ default main
### Homeserver
matrix.org
### Have you submitted a rageshake?
No | non_process | libva error init failed steps to reproduce install element desktop following the instructions from the official site using apt repo run element desktop from terminal get error libva error usr lib linux gnu dri drv video so init failed what happened what did you expect proper run what happened element desktop runs but all i can see is the window filled with white operating system ubuntu gnome application version element how did you install the app default main homeserver matrix org have you submitted a rageshake no | 0 |
17,792 | 23,719,737,937 | IssuesEvent | 2022-08-30 14:29:08 | FOLIO-FSE/folio_migration_tools | https://api.github.com/repos/FOLIO-FSE/folio_migration_tools | closed | Add configuration option to reset HRID settings | simplify_migration_process | - [x] Add task configuration for resetting hrid settings, meaning this needs to be set per task.
- [x] When resetting, just reset the numbers to the FOLIO default original values **for that specific object type.** Honor any other setting. | 1.0 | Add configuration option to reset HRID settings - - [x] Add task configuration for resetting hrid settings, meaning this needs to be set per task.
- [x] When resetting, just reset the numbers to the FOLIO default original values **for that specific object type.** Honor any other setting. | process | add configuration option to reset hrid settings add task configuration for resetting hrid settings meaning this needs to be set per task when resetting just reset the numbers to the folio default original values for that specific object type honor any other setting | 1 |
7,660 | 10,745,588,309 | IssuesEvent | 2019-10-30 09:21:10 | Addalin/cameranetwork | https://api.github.com/repos/Addalin/cameranetwork | closed | GUI: ImportError: No module named indexes.base | Priority: High Type: Bug inverse pre-processing | In the GUI on pressing 'Query'
```bash
2019-09-24 16:18:43,463 [Thread-1 ] [ERROR] Uncaught exception in ZMQStream callback
Traceback (most recent call last):
File "/home/shubi/.conda/envs/cvenv2.7/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 438, in _run_callback
callback(*args, **kwargs)
File "/home/shubi/.conda/envs/cvenv2.7/lib/python2.7/site-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/home/shubi/PycharmProjects/cameranetwork/CameraNetwork/mdp/client.py", line 190, in _on_message
self.on_message(msg)
File "/home/shubi/PycharmProjects/cameranetwork/CameraNetwork/client.py", line 163, in on_message
status, cmd, args, kwds = cPickle.loads(msg[0])
ImportError: No module named indexes.base
```
Possibly related to pandas version mismatch.
From [stackoverflow](https://stackoverflow.com/questions/36888228/python-read-pickle-importerror-no-module-named-indexes-base):
*This error can be caused by a version mismatch between the version of pandas used to save the dataframe and the version of pandas used to load it.* | 1.0 | GUI: ImportError: No module named indexes.base - In the GUI on pressing 'Query'
```bash
2019-09-24 16:18:43,463 [Thread-1 ] [ERROR] Uncaught exception in ZMQStream callback
Traceback (most recent call last):
File "/home/shubi/.conda/envs/cvenv2.7/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 438, in _run_callback
callback(*args, **kwargs)
File "/home/shubi/.conda/envs/cvenv2.7/lib/python2.7/site-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/home/shubi/PycharmProjects/cameranetwork/CameraNetwork/mdp/client.py", line 190, in _on_message
self.on_message(msg)
File "/home/shubi/PycharmProjects/cameranetwork/CameraNetwork/client.py", line 163, in on_message
status, cmd, args, kwds = cPickle.loads(msg[0])
ImportError: No module named indexes.base
```
Possibly related to pandas version mismatch.
From [stackoverflow](https://stackoverflow.com/questions/36888228/python-read-pickle-importerror-no-module-named-indexes-base):
*This error can be caused by a version mismatch between the version of pandas used to save the dataframe and the version of pandas used to load it.* | process | gui importerror no module named indexes base in the gui on pressing query bash uncaught exception in zmqstream callback traceback most recent call last file home shubi conda envs lib site packages zmq eventloop zmqstream py line in run callback callback args kwargs file home shubi conda envs lib site packages tornado stack context py line in null wrapper return fn args kwargs file home shubi pycharmprojects cameranetwork cameranetwork mdp client py line in on message self on message msg file home shubi pycharmprojects cameranetwork cameranetwork client py line in on message status cmd args kwds cpickle loads msg importerror no module named indexes base possibly related to pandas version mismatch from this error can be caused by a version mismatch between the version of pandas used to save the dataframe and the version of pandas used to load it | 1 |
366,610 | 25,593,993,704 | IssuesEvent | 2022-12-01 14:57:27 | practice-uffs/tour-virtual | https://api.github.com/repos/practice-uffs/tour-virtual | closed | Data collection for the LDS campus | documentation | this issue aims to find and contact the campus administration and the communications office to collect data on maps, structures, sectors, aerial images, and everything relevant to the project | 1.0 | Data collection for the LDS campus - this issue aims to find and contact the campus administration and the communications office to collect data on maps, structures, sectors, aerial images, and everything relevant to the project | non_process | data collection for the lds campus this issue aims to find and contact the campus administration and the communications office to collect data on maps structures sectors aerial images and everything relevant to the project | 0 |
29,063 | 13,934,579,136 | IssuesEvent | 2020-10-22 10:13:42 | Tribler/tribler | https://api.github.com/repos/Tribler/tribler | opened | Use SQLite to store torrent data | enhancement performance | Currently, we store torrent data and stats in separate files in the `dlcheckpoints` dir. Libtorrent never touches any of these. Instead, we mediate its access through Python code.
Moving to use PonyORM backed SQLite storage for torrents will save us a lot of hassle regarding file access synchronization/persistence, the kind of things affecting e.g. #5615. Also, it will enable simpler and tighter integration with Channels database, resulting in faster UI response.
What do you think, guys?
| True | Use SQLite to store torrent data - Currently, we store torrent data and stats in separate files in the `dlcheckpoints` dir. Libtorrent never touches any of these. Instead, we mediate its access through Python code.
Moving to use PonyORM backed SQLite storage for torrents will save us a lot of hassle regarding file access synchronization/persistence, the kind of things affecting e.g. #5615. Also, it will enable simpler and tighter integration with Channels database, resulting in faster UI response.
What do you think, guys?
| non_process | use sqlite to store torrent data currently we store torrent data and stats in separate files in the dlcheckpoints dir libtorrent never touches any of these instead we mediate its access through python code moving to use ponyorm backed sqlite storage for torrents will save us a lot of hassle regarding file access synchronization persistence the kind of things affecting e g also it will enable simpler and tighter integration with channels database resulting in faster ui response what do you think guys | 0 |
65,563 | 14,739,356,811 | IssuesEvent | 2021-01-07 07:02:25 | habusha/CIOIL | https://api.github.com/repos/habusha/CIOIL | opened | CVE-2020-27218 (Medium) detected in jetty-server-9.4.12.v20180830.jar | security vulnerability | ## CVE-2020-27218 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-server-9.4.12.v20180830.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: CIOIL/infra_github/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.12.v20180830/jetty-server-9.4.12.v20180830.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-jetty-2.0.6.RELEASE.jar (Root Library)
- jetty-webapp-9.4.12.v20180830.jar
- jetty-servlet-9.4.12.v20180830.jar
- jetty-security-9.4.12.v20180830.jar
- :x: **jetty-server-9.4.12.v20180830.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/habusha/CIOIL/commit/03e78ae9cdd310ea6bd9663baf9a22b4609d9fc8">03e78ae9cdd310ea6bd9663baf9a22b4609d9fc8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Eclipse Jetty version 9.4.0.RC0 to 9.4.34.v20201102, 10.0.0.alpha0 to 10.0.0.beta2, and 11.0.0.alpha0 to 11.0.0.beta2, if GZIP request body inflation is enabled and requests from different clients are multiplexed onto a single connection, and if an attacker can send a request with a body that is received entirely but not consumed by the application, then a subsequent request on the same connection will see that body prepended to its body. The attacker will not see any data but may inject data into the body of the subsequent request.
<p>Publish Date: 2020-11-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27218>CVE-2020-27218</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-86wm-rrjm-8wh8">https://github.com/eclipse/jetty.project/security/advisories/GHSA-86wm-rrjm-8wh8</a></p>
<p>Release Date: 2020-11-28</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-server:9.4.35.v20201120, 10.0.0.beta3, 11.0.0.beta3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-27218 (Medium) detected in jetty-server-9.4.12.v20180830.jar - ## CVE-2020-27218 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-server-9.4.12.v20180830.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: CIOIL/infra_github/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.12.v20180830/jetty-server-9.4.12.v20180830.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-jetty-2.0.6.RELEASE.jar (Root Library)
- jetty-webapp-9.4.12.v20180830.jar
- jetty-servlet-9.4.12.v20180830.jar
- jetty-security-9.4.12.v20180830.jar
- :x: **jetty-server-9.4.12.v20180830.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/habusha/CIOIL/commit/03e78ae9cdd310ea6bd9663baf9a22b4609d9fc8">03e78ae9cdd310ea6bd9663baf9a22b4609d9fc8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Eclipse Jetty version 9.4.0.RC0 to 9.4.34.v20201102, 10.0.0.alpha0 to 10.0.0.beta2, and 11.0.0.alpha0 to 11.0.0.beta2, if GZIP request body inflation is enabled and requests from different clients are multiplexed onto a single connection, and if an attacker can send a request with a body that is received entirely but not consumed by the application, then a subsequent request on the same connection will see that body prepended to its body. The attacker will not see any data but may inject data into the body of the subsequent request.
<p>Publish Date: 2020-11-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27218>CVE-2020-27218</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-86wm-rrjm-8wh8">https://github.com/eclipse/jetty.project/security/advisories/GHSA-86wm-rrjm-8wh8</a></p>
<p>Release Date: 2020-11-28</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-server:9.4.35.v20201120, 10.0.0.beta3, 11.0.0.beta3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in jetty server jar cve medium severity vulnerability vulnerable library jetty server jar the core jetty server artifact library home page a href path to dependency file cioil infra github pom xml path to vulnerable library home wss scanner repository org eclipse jetty jetty server jetty server jar dependency hierarchy spring boot starter jetty release jar root library jetty webapp jar jetty servlet jar jetty security jar x jetty server jar vulnerable library found in head commit a href vulnerability details in eclipse jetty version to to and to if gzip request body inflation is enabled and requests from different clients are multiplexed onto a single connection and if an attacker can send a request with a body that is received entirely but not consumed by the application then a subsequent request on the same connection will see that body prepended to its body the attacker will not see any data but may inject data into the body of the subsequent request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org eclipse jetty jetty server step up your open source security game with whitesource | 0 |
12,817 | 15,191,741,364 | IssuesEvent | 2021-02-15 20:31:31 | pystatgen/sgkit | https://api.github.com/repos/pystatgen/sgkit | closed | Complete NumFOCUS onboarding | process + tools | - high-quality .svg file of your logo
- description of the project for our website
- names and emails of any other project members we should add to our communications
- Twitter handle (if you have one)
- your time zone | 1.0 | Complete NumFOCUS onboarding - - high-quality .svg file of your logo
- description of the project for our website
- names and emails of any other project members we should add to our communications
- Twitter handle (if you have one)
- your time zone | process | complete numfocus onboarding high quality svg file of your logo description of the project for our website names and emails of any other project members we should add to our communications twitter handle if you have one your time zone | 1 |
11,140 | 13,957,692,232 | IssuesEvent | 2020-10-24 08:10:37 | alexanderkotsev/geoportal | https://api.github.com/repos/alexanderkotsev/geoportal | opened | LT: The metadata documents provided by the newly created CSW service still doesn't appear in the INSPIRE geoportal | Geoportal Harvesting process LT - Lithuania | From: Rita Viliuviene <R.Viliuviene@gis-centras.lt>
Sent: 15 March 2019 14:38
To: QUAGLIA Angelo (JRC-ISPRA-EXT)
Subject: The metadata documents provided by the newly created CSW service still doesn’t appear in the INSPIRE geoportal
Dear Angelo,
We have registered a new CSW service (https://www.geoportal.lt/geonetwork/srv/eng/csw?SERVICE=CSW&VERSION=2.0.2&REQUEST=GetCapabilities) in the INSPIRE Service Register system (please find an attachment). The service is accessible in INSPIRE Resource Browser (http://inspire-geoportal.ec.europa.eu/proxybrowser/#fq=memberStateCountryCode%3Alt&q=*%3A*) after the last harvesting session already, but the metadata documents provided by the newly created CSW service still doesn’t appear in the INSPIRE geoportal (http://inspire-geoportal.ec.europa.eu/results.html?country=lt&view=details&theme=none).
Moreover, the old metadata documents, which were provided by an old CSW service, are still accessible both in the INSPIRE geoportal and the INSPIRE Resource Browser.
So, could you please advise us about where the problem is? Thank you in advance.
Best regards,
Rita Viliuviene
Product analyst
SE „GIS-Centras“, Seliu str. 66, Vilnius
Lithuania | 1.0 | LT: The metadata documents provided by the newly created CSW service still doesn't appear in the INSPIRE geoportal - From: Rita Viliuviene <R.Viliuviene@gis-centras.lt>
Sent: 15 March 2019 14:38
To: QUAGLIA Angelo (JRC-ISPRA-EXT)
Subject: The metadata documents provided by the newly created CSW service still doesn’t appear in the INSPIRE geoportal
Dear Angelo,
We have registered a new CSW service (https://www.geoportal.lt/geonetwork/srv/eng/csw?SERVICE=CSW&VERSION=2.0.2&REQUEST=GetCapabilities) in the INSPIRE Service Register system (please find an attachment). The service is accessible in INSPIRE Resource Browser (http://inspire-geoportal.ec.europa.eu/proxybrowser/#fq=memberStateCountryCode%3Alt&q=*%3A*) after the last harvesting session already, but the metadata documents provided by the newly created CSW service still doesn’t appear in the INSPIRE geoportal (http://inspire-geoportal.ec.europa.eu/results.html?country=lt&view=details&theme=none).
Moreover, the old metadata documents, which were provided by an old CSW service, still are accessible both in INSPIRE geoportal and INSPIRE Resource Browser.
So, could you please advise us about where the problem is? Thank you in advance.
Best regards,
Rita Viliuviene
Product analyst
SE „GIS-Centras“, Seliu str. 66, Vilnius
Lithuania | process | lt the metadata documents provided by the newly created csw service still doesn t appear in the inspire geoportal from rita viliuviene lt r viliuviene gis centras lt gt sent march to quaglia angelo jrc ispra ext subject the metadata documents provided by the newly created csw service still doesn rsquo t appear in the inspire geoportal dear angelo we have registered a new csw service in the inspire service register system please find an attachment the service is accessible in inspire resource browser after the last harvesting session already but the metadata documents provided by the newly created csw service still doesn rsquo t appear in the inspire geoportal moreover the old metadata documents which were provided by an old csw service still are accessible both in inspire geoportal and inspire recourse browser so could you please advise us about where the problem is thank you in advance best regards rita viliuviene product analyst se bdquo gis centras ldquo seliu str vilnius lithuania | 1 |
15,813 | 20,013,394,447 | IssuesEvent | 2022-02-01 09:32:58 | Students-of-the-city-of-Kostroma/tournament-project | https://api.github.com/repos/Students-of-the-city-of-Kostroma/tournament-project | closed | Handling the error when attempting to insert a tournament with a name that is already in use into the DB | error processing no-issue-activity | When the "Bracket" button is clicked, the case where the user has named the tournament bracket with a name that is already in use must be handled. | 1.0 | Handling the error when attempting to insert a tournament with a name that is already in use into the DB - When the "Bracket" button is clicked, the case where the user has named the tournament bracket with a name that is already in use must be handled. | process | handling the error when attempting to insert a tournament with a name that is already in use into the db when the bracket button is clicked the case where the user has named the tournament bracket with a name that is already in use must be handled | 1
12,495 | 3,079,966,256 | IssuesEvent | 2015-08-21 19:15:18 | mozilla/webmaker-core | https://api.github.com/repos/mozilla/webmaker-core | closed | Introduce Discovery section | design | When the app loads, it sends the user to the Discovery section of the app. Upon first glance this can be overwhelming and confusing. @LauraReynal suggested to add a line or two of introductory content to make this transition easier. | 1.0 | Introduce Discovery section - When the app loads, it sends the user to the Discovery section of the app. Upon first glance this can be overwhelming and confusing. @LauraReynal suggested to add a line or two of introductory content to make this transition easier. | non_process | introduce discovery section when the app loads it sends the user to the discovery section of the app upon first glance this can be overwhelming and confusing laurareynal suggested to add a line or two of introductory content to make this transition easier | 0 |
18,638 | 24,580,573,084 | IssuesEvent | 2022-10-13 15:19:24 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Mobile apps] Sign in screen > All the text fields, buttons, hyperlinks in sign in screen are displayed in smaller size in higher screen size devices | Bug P2 UI Process: Fixed Process: Tested QA Process: Tested dev Auth server | Steps:
1. Install the app (android or iOS) on devices having at least 5.5 inch screen size
2. Navigate to signin screen
3. Observe the text fields, buttons, and hyperlinks
Note: Issue not observed in smaller screen size devices eg. iPhone SE 2nd gen
Refer screenshots for the below devices:
iOS - iPhone XS Max (6.46" screen size)

Android - Samsung M32 (6.4" screen size)

Android - OnePlus 8T (6.55" screen size)

| 3.0 | [Mobile apps] Sign in screen > All the text fields, buttons, hyperlinks in sign in screen are displayed in smaller size in higher screen size devices - Steps:
1. Install the app (android or iOS) on devices having at least 5.5 inch screen size
2. Navigate to signin screen
3. Observe the text fields, buttons, and hyperlinks
Note: Issue not observed in smaller screen size devices eg. iPhone SE 2nd gen
Refer screenshots for the below devices:
iOS - iPhone XS Max (6.46" screen size)

Android - Samsung M32 (6.4" screen size)

Android - OnePlus 8T (6.55" screen size)

| process | sign in screen all the text fields buttons hyperlinks in sign in screen are displayed in smaller size in higher screen size devices steps install the app android or ios on devices having at least inch screen size navigate to signin screen observe the text fields buttons and hyperlinks note issue not observed in smaller screen size devices eg iphone se gen refer screenshots for the below devices ios iphone xs max screen size android samsung screen size android oneplus screen size | 1 |
385,208 | 11,415,171,756 | IssuesEvent | 2020-02-02 09:08:24 | threefoldfoundation/www_threefold.io_new | https://api.github.com/repos/threefoldfoundation/www_threefold.io_new | closed | Get SEC into our Large Farmer section on the grid page. | priority_major state_inprogress state_question | We want to get Smart Edge Cloud feature on our website before Feb. 2nd.
Requirements:
- [x] Get logo (@VonSub )
- [x] Get text vetted by @AdnanFatayerji (@VonSub)
- [x] Make 4th Column in Large Farmer Section on Grid page (@ehab-hassan)
- [x] Feature the logo (@ehab-hassan)
- [x] Put in the text (@ehab-hassan )
- [x] Add button "visit Smart Edge Cloud" that links to https://thesecloud.com/ (@ehab-hassan )
- [x] Check Mobile view if it's ok (@ehab-hassan )
- [x] Push to production (@ehab-hassan )
[LOGO-.pdf](https://github.com/threefoldfoundation/www_threefold.io_new/files/4142760/LOGO-.pdf)
| 1.0 | Get SEC into our Large Farmer section on the grid page. - We want to get Smart Edge Cloud feature on our website before Feb. 2nd.
Requirements:
- [x] Get logo (@VonSub )
- [x] Get text vetted by @AdnanFatayerji (@VonSub)
- [x] Make 4th Column in Large Farmer Section on Grid page (@ehab-hassan)
- [x] Feature the logo (@ehab-hassan)
- [x] Put in the text (@ehab-hassan )
- [x] Add button "visit Smart Edge Cloud" that links to https://thesecloud.com/ (@ehab-hassan )
- [x] Check Mobile view if it's ok (@ehab-hassan )
- [x] Push to production (@ehab-hassan )
[LOGO-.pdf](https://github.com/threefoldfoundation/www_threefold.io_new/files/4142760/LOGO-.pdf)
| non_process | get sec into our large farmer section on the grid page we want to get smart edge cloud feature on our website before feb requirements get logo vonsub get text vetted by adnanfatayerji vonsub make column in large farmer section on grid page ehab hassan feature the logo ehab hassan put in the text ehab hassan add button visit smart edge cloud that links to ehab hassan check mobile view if it s ok ehab hassan push to production ehab hassan | 0 |
13,659 | 16,375,973,063 | IssuesEvent | 2021-05-16 04:54:57 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | [FEATURE][processing] Allow copying/cut/paste of model components | 3.14 Automatic new feature Graphical modeler Processing | Original commit: https://github.com/qgis/QGIS/commit/47f96e246621660b0b941a0a6df9f1b5da33c7dd by nyalldawson
This commit allows users to copy and paste model components, both
within the same model and between different models | 1.0 | [FEATURE][processing] Allow copying/cut/paste of model components - Original commit: https://github.com/qgis/QGIS/commit/47f96e246621660b0b941a0a6df9f1b5da33c7dd by nyalldawson
This commit allows users to copy and paste model components, both
within the same model and between different models | process | allow copying cut paste of model components original commit by nyalldawson this commit allows users to copy and paste model components both within the same model and between different models | 1 |
21,967 | 30,462,860,173 | IssuesEvent | 2023-07-17 08:18:07 | camunda/issues | https://api.github.com/repos/camunda/issues | closed | BPMN Signal Events(1): Top-level signal start events | component:desktopModeler component:operate component:optimize component:webModeler component:zeebe component:zeebe-process-automation public feature-parity potential:8.3 | > This is an epic internal-docs issue. It bundles all activities we conduct around a certain initiative. It will typically link to various child issues from various repositories and can be spread across multiple teams.
### Value Proposition Statement
Trigger a BPMN signal via API to start multiple different definitions at the same time.
### User Problem
In process automation sometimes there is a need to broadcast a signal to one or multiple waiting process instances (as intermediate) or processes (as start).
Example Use-Case: [Insurance Policy conditions change](https://docs.camunda.org/manual/7.16/reference/bpmn20/events/signal-events/)
In this epic we focus on solving the first problem:
Starting an instance for all process definitions with a signal start event matching the signal name.
### User Stories
**Design**:
- As a Developer, I can model Top-level signal start events and define a signal name.
**Automate**:
- As a Developer, I can deploy Top-level signal start events to the Engine (Zeebe).
- As a Developer, I can be sure that the Engine uses signal events correctly and e.g. triggers Top-Level signal start events when using the gRPC API or Java Client.
- As a Developer, I can see Signal execution in Operate.
**Improve**:
- Allow displaying processes with Signals in Optimize.
### Implementation Notes
In the first stage, it will be possible for users to broadcast signals using the gRPC API, starting new process instances for the processes that have top-level signal start events.
Model highlighting that only the top-level signal start event will be supported at this stage, the other signal events are marked as not yet supported

Signal broadcasts arriving in the gateway have to be written to a partition leader. The partition leader needs to relay the signal to other partitions, and it needs to start instances for each of the top-level signal start events that match the name of the broadcasted signal. The other partitions don't create new instances and don't have to relay the signal.
A process is subscribed to a signal using a new SignalStartEventSubscription record.
<sup>:robot: This issue is automatically synced from: [source](https://github.com/camunda/product-hub/issues/142)</sup>
<!-- copiedFromSourceIssue: https://github.com/camunda/product-hub/issues/142 --> | 1.0 | BPMN Signal Events(1): Top-level signal start events - > This is an epic internal-docs issue. It bundles all activities we conduct around a certain initiative. It will typically link to various child issues from various repositories and can be spread across multiple teams.
### Value Proposition Statement
Trigger a BPMN signal via API to start multiple different definitions at the same time.
### User Problem
In process automation sometimes there is a need to broadcast a signal to one or multiple waiting process instances (as intermediate) or processes (as start).
Example Use-Case: [Insurance Policy conditions change](https://docs.camunda.org/manual/7.16/reference/bpmn20/events/signal-events/)
In this epic we focus on solving the first problem:
Starting an instance for all process definitions with a signal start event matching the signal name.
### User Stories
**Design**:
- As a Developer, I can model Top-level signal start events and define a signal name.
**Automate**:
- As a Developer, I can deploy Top-level signal start events to the Engine (Zeebe).
- As a Developer, I can be sure that the Engine uses signal events correctly and e.g. triggers Top-Level signal start events when using the gRPC API or Java Client.
- As a Developer, I can see Signal execution in Operate.
**Improve**:
- Allow displaying processes with Signals in Optimize.
### Implementation Notes
In the first stage, it will be possible for users to broadcast signals using the gRPC API, starting new process instances for the processes that have top-level signal start events.
Model highlighting that only the top-level signal start event will be supported at this stage, the other signal events are marked as not yet supported

Signal broadcasts arriving in the gateway have to be written to a partition leader. The partition leader needs to relay the signal to other partitions, and it needs to start instances for each of the top-level signal start events that match the name of the broadcasted signal. The other partitions don't create new instances and don't have to relay the signal.
A process is subscribed to a signal using a new SignalStartEventSubscription record.
<sup>:robot: This issue is automatically synced from: [source](https://github.com/camunda/product-hub/issues/142)</sup>
<!-- copiedFromSourceIssue: https://github.com/camunda/product-hub/issues/142 --> | process | bpmn signal events top level signal start events this is an epic internal docs issue it bundles all activities we conduct around a certain initiative it will typically links to various child issues from various repositories and can be spread across multiple teams value proposition statement trigger a bpmn signal via api to start multiple different definitions at the same time user problem in process automation sometimes there is a need to broadcast a signal to one or multiple waiting process instances as intermediate or processes as start example use case in this epic we focus on solving the first problem starting an instance for all process definitions with a signal start event matching the signal name user stories design as a developer i can model top level signal start events and define a signal name automate as a developer i can deploy top level signal start events to the engine zeebe as a developer i can be sure that the engine uses signal events correctly and e g triggers top level signal start events when using the grpc api or java client as a developer i can see signal execution in operate improve allow displaying processes with signals in optimize implementation notes in the first stage it will be possible for users to broadcast signals using the grpc api starting new process instances for the processes that have top level signal start events model highlighting that only the top level signal start event will be supported at this stage the other signal events are marked as not yet supported signal broadcasts arriving in the gateway have to be written to a partition leader the partition leader needs to relay the signal to other partitions and it needs to start instances for each of the top level signal start events that match the name of the broadcasted signal the other partitions don t create new instances and don t have to relay the signal a process is subscribed to a signal using a new signalstarteventsubscription record robot this issue is automatically synced from | 1
6,707 | 9,815,588,741 | IssuesEvent | 2019-06-13 12:59:40 | linnovate/root | https://api.github.com/repos/linnovate/root | closed | second blank tag bug | 2.0.7 Fixed Process bug Visual bug | in an entity, add a tag
press the plus button to create a second tag
click somewhere on the screen to leave the tag blank
the plus button disappears and the "select tags" text stays visible

| 1.0 | second blank tag bug - in an entity, add a tag
press the plus button to create a second tag
click somewhere on the screen to leave the tag blank
the plus button disappears and the "select tags" text stays visible

| process | second blank tag bug in an entity add a tag press the plus button to create a second tag click somewhere on the screen to leave the tag blank the plus button disapears and the select tags text stays visible | 1 |
496,678 | 14,352,057,389 | IssuesEvent | 2020-11-30 03:12:57 | arfc/mhtgr350-benchmark | https://api.github.com/repos/arfc/mhtgr350-benchmark | closed | Add post-processors for the full-core model in Serpent | Comp:Analysis Difficulty:2-Challenging Priority:2-Normal Status:5-In Review | The full-core model in Serpent is missing the python functions that:
* add legends to the model figure
* add location of the detectors to the model figure
* produce the results, including axial flux, radial flux, and radial power distribution
This issue can be closed when those functions are added to the repo. | 1.0 | Add post-processors for the full-core model in Serpent - The full-core model in Serpent is missing the python functions that:
* add legends to the model figure
* add location of the detectors to the model figure
* produce the results, including axial flux, radial flux, and radial power distribution
This issue can be closed when those functions are added to the repo. | non_process | add post processors for the full core model in serpent the full core model in serpent is missing the python functions that add legends to the model figure add location of the detectors to the model figure produce the results including axial flux radial flux and radial power distribution this issue can be closed when those functions are added to the repo | 0 |
148,951 | 11,872,506,559 | IssuesEvent | 2020-03-26 15:54:56 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | Settings page resets when websocket events occur | component:ui priority:medium state:needs_test type:bug | ##### ISSUE TYPE
- Bug Report
##### SUMMARY
When on a tab on the settings page and a websocket event occurs, user is thrown back to the first tab, losing any unsaved changes.
##### ENVIRONMENT
* AWX version: 9.0.1
* AWX install method: any
* Ansible version: N/A
* Operating System: macOS 10.14.6
* Web Browser: Safari
##### STEPS TO REPRODUCE
- Go to Settings -> Authentication -> GitHub tab.
- Go to another tab and launch a job
- Go back to the Settings page
##### EXPECTED RESULTS
I should be looking at the tab I was on, with my unsaved changes.
##### ACTUAL RESULTS
Unsaved changes are lost, tossed back to the first tab. | 1.0 | Settings page resets when websocket events occur - ##### ISSUE TYPE
- Bug Report
##### SUMMARY
When on a tab on the settings page and a websocket event occurs, user is thrown back to the first tab, losing any unsaved changes.
##### ENVIRONMENT
* AWX version: 9.0.1
* AWX install method: any
* Ansible version: N/A
* Operating System: macOS 10.14.6
* Web Browser: Safari
##### STEPS TO REPRODUCE
- Go to Settings -> Authentication -> GitHub tab.
- Go to another tab and launch a job
- Go back to the Settings page
##### EXPECTED RESULTS
I should be looking at the tab I was on, with my unsaved changes.
##### ACTUAL RESULTS
Unsaved changes are lost, tossed back to the first tab. | non_process | settings page resets when websocket events occur issue type bug report summary when on a tab on the settings page and a websocket event occurs user is thrown back to the first tab losing any unsaved changes environment awx version awx install method any ansible version n a operating system macos web browser safari steps to reproduce go to settings authentication github tab go to another tab and launch a job go back to the settings page expected results i should be looking at the tab i was on with my unsaved changes actual results unsaved changes are lost tossed back to the first tab | 0 |
97,375 | 12,230,637,771 | IssuesEvent | 2020-05-04 05:34:14 | Qiskit/qiskit.org | https://api.github.com/repos/Qiskit/qiskit.org | closed | confusion around elements | Human Design type: user story | user not familiar with elements, therefore not aware what terra, aer, aqua and ignis mean in the top left corner.
would be helpful for user to have an elements landing page
in the current elements pages, research-based users find the "about" section very useful, but the example and stack to be not helpful and confusing, respectively | 1.0 | confusion around elements - user not familiar with elements, therefore not aware what terra, aer, aqua and ignis mean in the top left corner.
would be helpful for user to have an elements landing page
in the current elements pages, research-based users find the "about" section very useful, but the example and stack to be not helpful and confusing, respectively | non_process | confusion around elements user not familiar with elements therefore not aware what terra aer aqua and ignis mean in the top left corner would be helpful for user to have an elements landing page in the current elements pages research based users find the about section very useful but the example and stack to be not helpful and confusing respectively | 0 |
75,330 | 25,776,502,550 | IssuesEvent | 2022-12-09 12:27:32 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | Failover configuration should be smarter about the load balancer comparison | Type: Defect Team: Client Source: Internal Module: Config | We try to make sure that the failover configurations are consistent with each other.
One step of this was to compare the load balancers. However, it performs instance equality checks, which can fail easily under different scenarios.
When the XML/YAML configuration is used and the load balancer is specified as `<load-balancer type="random"/>`, we set the load balancer via the following code, which sets the load balancer to a new instance, which won't pass the equality check mentioned above.
```java
String type = getAttribute(node, "type");
if (matches("random", type)) {
clientConfig.setLoadBalancer(new RandomLB());
} else if (matches("round-robin", type)) {
clientConfig.setLoadBalancer(new RoundRobinLB());
} else if ("custom".equals(type)) {
String loadBalancerClassName = parseCustomLoadBalancerClassName(node);
clientConfig.setLoadBalancerClassName(loadBalancerClassName);
}
```
Then, the failover config will fail fast in the validation phase, saying that they are different load balancers, even if the configs were identical.
We should be smarter about this and perform the equality comparison with a different method, possibly with a class comparison.
For now, affected users can use the following workaround when trying to set the load balancer to `random` or `round-robin` in the declarative configurations.
```
<load-balancer type="custom">
com.hazelcast.client.util.RandomLB
</load-balancer>
```
which will set the load balancer class name, and initiate the load balancer lazily while creating the client. The load balancer name check will pass because it is doing a simple string equality comparison.
| 1.0 | Failover configuration should be smarter about the load balancer comparison - We try to make sure that the failover configurations are consistent with each other.
One step of this was to compare the load balancers. However, it performs instance equality checks, which can fail easily under different scenarios.
When the XML/YAML configuration is used and the load balancer is specified as `<load-balancer type="random"/>`, we set the load balancer via the following code, which sets the load balancer to a new instance, which won't pass the equality check mentioned above.
```java
String type = getAttribute(node, "type");
if (matches("random", type)) {
clientConfig.setLoadBalancer(new RandomLB());
} else if (matches("round-robin", type)) {
clientConfig.setLoadBalancer(new RoundRobinLB());
} else if ("custom".equals(type)) {
String loadBalancerClassName = parseCustomLoadBalancerClassName(node);
clientConfig.setLoadBalancerClassName(loadBalancerClassName);
}
```
Then, the failover config will fail fast in the validation phase, saying that they are different load balancers, even if the configs were identical.
We should be smarter about this and perform the equality comparison with a different method, possibly with a class comparison.
For now, affected users can use the following workaround when trying to set the load balancer to `random` or `round-robin` in the declarative configurations.
```
<load-balancer type="custom">
com.hazelcast.client.util.RandomLB
</load-balancer>
```
which will set the load balancer class name, and initiate the load balancer lazily while creating the client. The load balancer name check will pass because it is doing a simple string equality comparison.
| non_process | failover configuration should be smarter about the load balancer comparison we try to make sure that the failover configurations are consistent with each other one step of this was to compare the load balancers however it performs instance equality checks which can fail easily under different scenarios when the xml yaml configuration is used and the load balancer is specified as we set the load balancer via the following code which sets the load balancer to a new instance which won t pass the equality check mentioned above java string type getattribute node type if matches random type clientconfig setloadbalancer new randomlb else if matches round robin type clientconfig setloadbalancer new roundrobinlb else if custom equals type string loadbalancerclassname parsecustomloadbalancerclassname node clientconfig setloadbalancerclassname loadbalancerclassname then the failover config will fail fast in the validation phase saying that they are different load balancers even if the configs were identical we should be smarter about this and perform the equality comparison with a different method possibly with a class comparison for now affected users can use the following workaround when trying to set the load balancer to random or round robin in the declarative configurations com hazelcast client util randomlb which will set the load balancer class name and initiate the load balancer lazily while creating the client the load balancer name check will pass because it is doing a simple string equality comparison | 0 |
2,584 | 5,344,727,861 | IssuesEvent | 2017-02-17 15:15:59 | ElliotAOram/GhostPyramid | https://api.github.com/repos/ElliotAOram/GhostPyramid | opened | Feature 1: Capture video feed from external camera | Image Processing | Add functionality to capture the output from a camera. This should work with both the internal webcam and an external webcam (The test should be carried out using the internal webcam as tests will not normally include external webcams at this point in time)
Required tasks:
* Entry tasks:
* [ ] Create video processing application
* [ ] Create unit test for video processing application
* Design feature:
* [ ] Consult overall model and decide if change is required
* Build by feature:
* [ ] Write tests for camera feed feature
* [ ] Write code to pass tests
* [ ] Refactor where required | 1.0 | Feature 1: Capture video feed from external camera - Add functionality to capture the output from a camera. This should work with both the internal webcam and an external webcam (The test should be carried out using the internal webcam as tests will not normally include external webcams at this point in time)
Required tasks:
* Entry tasks:
* [ ] Create video processing application
* [ ] Create unit test for video processing application
* Design feature:
* [ ] Consult overall model and decide if change is required
* Build by feature:
* [ ] Write tests for camera feed feature
* [ ] Write code to pass tests
* [ ] Refactor where required | process | feature capture video feed from external camera add functionality to capture the output from a camera this should work with both the internal webcam and an external webcam the test should be carried out using the internal webcam as tests will not normally include external webcams at this point in time required tasks entry tasks create video processing application create unit test for video processing application design feature consult overall model and decide if change is required build by feature write tests for camera feed feature write code to pass tests refactor where required | 1 |
16,317 | 20,972,511,637 | IssuesEvent | 2022-03-28 12:42:07 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Question about Process redirected output and WaitForExit | question area-System.Diagnostics.Process | Hi!
I am investigating an [issue on vstest](https://github.com/dotnet/sdk/issues/22311) and I have nailed down the issue to the fact we receive the exit callback before we have flushed all the output/error messages.
Looking here, I have found the issue [When Process.WaitForExitCore(int milliseconds) waits successfully, the redirected output streams miss some data](https://github.com/dotnet/runtime/issues/51641) which seems to solve our problem.
But, I am not sure which of the discussed solutions is the one to apply (i.e. `WaitForExitAsync`, multiple calls to `WaitForExit`, or read until `null`). The various solutions seem to work but I guess there is one solution to favor and it's not clear what/why.
I think that I simply need to replace the call to [`p.WaitForExit(500)`](https://github.com/microsoft/vstest/blob/main/src/Microsoft.TestPlatform.PlatformAbstractions/common/System/ProcessHelper.cs#L90) by an awaited call to `WaitForExitAsync` with a cancellation token that times out.
Thank you for your feedback/answer. | 1.0 | Question about Process redirected output and WaitForExit - Hi!
I am investigating an [issue on vstest](https://github.com/dotnet/sdk/issues/22311) and I have nailed down the issue to the fact we receive the exit callback before we have flushed all the output/error messages.
Looking here, I have found the issue [When Process.WaitForExitCore(int milliseconds) waits successfully, the redirected output streams miss some data](https://github.com/dotnet/runtime/issues/51641) which seems to solve our problem.
But, I am not sure which of the discussed solutions is the one to apply (i.e. `WaitForExitAsync`, multiple calls to `WaitForExit`, or read until `null`). The various solutions seem to work but I guess there is one solution to favor and it's not clear what/why.
I think that I simply need to replace the call to [`p.WaitForExit(500)`](https://github.com/microsoft/vstest/blob/main/src/Microsoft.TestPlatform.PlatformAbstractions/common/System/ProcessHelper.cs#L90) by an awaited call to `WaitForExitAsync` with a cancellation token that times out.
Thank you for your feedback/answer. | process | question about process redirected output and waitforexit hi i am investigating an and i have nailed down the issue to the fact we receive the exit callback before we have flushed all the output error messages looking here i have found the issue which seems to solve our problem but i am not sure which of the discussed solution is the one to rely apply i e waitforexitasync multi calls to waitforexit or read until null the various solutions seem to work but i guess there is one solution to favor and it s not clear what why i think that i simply need to replace the call to by an awaited call to waitforexitasync with a cancellation token that times out thank you for your feedback answer | 1 |
18,178 | 24,229,288,097 | IssuesEvent | 2022-09-26 16:46:31 | hashgraph/hedera-mirror-node | https://api.github.com/repos/hashgraph/hedera-mirror-node | closed | Codecov GitHub Action upgrade from V1 to v3 | bug process | As of February 1, 2022, v1 for Codecov GitHub Action has been fully sunset and no longer functions.
Due to the [deprecation](https://about.codecov.io/blog/introducing-codecovs-new-uploader/) of the underlying bash uploader, the Codecov GitHub Action has released v2/v3 which will use the new [uploader](https://github.com/codecov/uploader).
Upgrade the workflows to use v3. | 1.0 | Codecov GitHub Action upgrade from V1 to v3 - As of February 1, 2022, v1 for Codecov GitHub Action has been fully sunset and no longer functions.
Due to the [deprecation](https://about.codecov.io/blog/introducing-codecovs-new-uploader/) of the underlying bash uploader, the Codecov GitHub Action has released v2/v3 which will use the new [uploader](https://github.com/codecov/uploader).
Upgrade the workflows to use v3. | process | codecov github action upgrade from to as of february for codecov github action has been fully sunset and no longer functions due to the of the underlying bash uploader the codecov github action has released which will use the new upgrade the workflows to use | 1 |
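The upgrade described above is typically a one-line change per workflow; a hedged sketch of the v3 step (the workflow path, secret name, and report path are assumptions, not taken from the hedera-mirror-node repo):

```yaml
# .github/workflows/ci.yml — illustrative, not the repository's actual workflow
steps:
  - uses: actions/checkout@v3
  - name: Upload coverage to Codecov
    uses: codecov/codecov-action@v3   # previously: codecov/codecov-action@v1
    with:
      token: ${{ secrets.CODECOV_TOKEN }}   # assumed secret name
      files: ./coverage.xml                 # assumed report path (v1 used `file:`)
```

v2/v3 switch from the deprecated bash uploader to the new binary uploader, so no other step changes are needed.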
16,080 | 20,251,594,946 | IssuesEvent | 2022-02-14 18:26:32 | googleapis/java-logging | https://api.github.com/repos/googleapis/java-logging | closed | Dependency Dashboard | type: process | This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/org.sonatype.plugins-nexus-staging-maven-plugin-1.x -->[build(deps): update dependency org.sonatype.plugins:nexus-staging-maven-plugin to v1.6.9](../pull/869)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-logging-3.x -->[chore(deps): update dependency com.google.cloud:google-cloud-logging to v3.6.3](../pull/870)
- [ ] <!-- rebase-branch=renovate/org.codehaus.mojo-build-helper-maven-plugin-3.x -->[build(deps): update dependency org.codehaus.mojo:build-helper-maven-plugin to v3.3.0](../pull/827)
- [ ] <!-- rebase-branch=renovate/com.google.code.gson-gson-2.x -->[deps: update dependency com.google.code.gson:gson to v2.9.0](../pull/868)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/actions-github-script-6.x -->[deps: update actions/github-script action to v6](../pull/865)
- [ ] <!-- recreate-branch=renovate/actions-setup-java-2.x -->[deps: update actions/setup-java action to v2](../pull/479)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 1.0 | Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/org.sonatype.plugins-nexus-staging-maven-plugin-1.x -->[build(deps): update dependency org.sonatype.plugins:nexus-staging-maven-plugin to v1.6.9](../pull/869)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-logging-3.x -->[chore(deps): update dependency com.google.cloud:google-cloud-logging to v3.6.3](../pull/870)
- [ ] <!-- rebase-branch=renovate/org.codehaus.mojo-build-helper-maven-plugin-3.x -->[build(deps): update dependency org.codehaus.mojo:build-helper-maven-plugin to v3.3.0](../pull/827)
- [ ] <!-- rebase-branch=renovate/com.google.code.gson-gson-2.x -->[deps: update dependency com.google.code.gson:gson to v2.9.0](../pull/868)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/actions-github-script-6.x -->[deps: update actions/github-script action to v6](../pull/865)
- [ ] <!-- recreate-branch=renovate/actions-setup-java-2.x -->[deps: update actions/setup-java action to v2](../pull/479)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| process | dependency dashboard this issue provides visibility into renovate updates and their statuses edited blocked these updates have been manually edited so renovate will no longer make changes to discard all commits and start over click on a checkbox pull pull pull pull ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull check this box to trigger a request for renovate to run again on this repository | 1 |
76,710 | 26,563,239,508 | IssuesEvent | 2023-01-20 17:37:51 | vector-im/element-android | https://api.github.com/repos/vector-im/element-android | opened | Re-authentication never stops. | T-Defect | ### Steps to reproduce
1. Getting Encryption upgrade available

2. Then clicked on it.
4. Then this Re-authentication Needed prompt never stops.
5. 
6. If tried to stop, then getting this-
7.

### Outcome
#### What did you expect?
#### What happened instead?
### Your phone model
Poco X3 Pro
### Operating system version
Android 12
### Application version and app store
Version 1.5.20 [40105202] (G-3e947e43), Matrix SDK Version 1.5.20 (3e947e43), olm version 3.2.12
### Homeserver
_No response_
### Will you send logs?
Yes
### Are you willing to provide a PR?
Yes | 1.0 | Re-authentication never stops. - ### Steps to reproduce
1. Getting Encryption upgrade available

2. Then clicked on it.
4. Then this Re-authentication Needed prompt never stops.
5. 
6. If tried to stop, then getting this-
7.

### Outcome
#### What did you expect?
#### What happened instead?
### Your phone model
Poco X3 Pro
### Operating system version
Android 12
### Application version and app store
Version 1.5.20 [40105202] (G-3e947e43), Matrix SDK Version 1.5.20 (3e947e43), olm version 3.2.12
### Homeserver
_No response_
### Will you send logs?
Yes
### Are you willing to provide a PR?
Yes | non_process | re authentication never stops steps to reproduce getting encryption upgrade available then clicked on it then this re authentication needed is never stoping if tried to stop then getting this outcome what did you expect what happened instead your phone model poco pro operating system version android application version and app store version g matrix sdk version olm version homeserver no response will you send logs yes are you willing to provide a pr yes | 0 |
3,170 | 6,224,119,455 | IssuesEvent | 2017-07-10 13:37:42 | dzhw/zofar | https://api.github.com/repos/dzhw/zofar | opened | server structure | category: technical.processes prio: 9999 status: discussion type: backlog.item | too few servers once more than three surveys are live in production; moreover, they are not
fail-safe; goal: build a server cluster
engl. tba
| 1.0 | server structure - too few servers once more than three surveys are live in production; moreover, they are not
fail-safe; goal: build a server cluster
engl. tba
| process | server structure zu wenig server wenn mehr als drei befragungen produktiv sind zudem sind sie nicht ausfallsicher ziel servercluster aufbauen engl tba | 1 |
754,603 | 26,395,482,837 | IssuesEvent | 2023-01-12 19:00:23 | internetarchive/openlibrary | https://api.github.com/repos/internetarchive/openlibrary | opened | Add "Safe For Work" Mode to blur certain NSWF covers | Type: Feature Request Module: Carousels Type: Epic Priority: 2 Needs: Community Discussion Affects: Experience Lead: @jimchamp Module: My Account Page | <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
### Describe the problem that you'd like solved
<!-- A clear and concise description of what you want to happen. -->
## SFW: “Safe For Work” mode
Let librarians add `content_warnings:` subject tags (e.g. `content_warning:pornographic` -- let librarians decide or use archive.org flags) via the standard `edit` UI.
- [ ] Add SFW: “Safe For Work” mode checkbox to account registration page + settings
- [ ] Have carousels blur items with subject `content_warnings:*` if account set as “SFW”
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
<!-- Which suggestions or requirements should be considered for how feature needs to appear or be implemented? -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
| 1.0 | Add "Safe For Work" Mode to blur certain NSWF covers - <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
### Describe the problem that you'd like solved
<!-- A clear and concise description of what you want to happen. -->
## SFW: “Safe For Work” mode
Let librarians add `content_warnings:` subject tags (e.g. `content_warning:pornographic` -- let librarians decide or use archive.org flags) via the standard `edit` UI.
- [ ] Add SFW: “Safe For Work” mode checkbox to account registration page + settings
- [ ] Have carousels blur items with subject `content_warnings:*` if account set as “SFW”
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
<!-- Which suggestions or requirements should be considered for how feature needs to appear or be implemented? -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
| non_process | add safe for work mode to blur certain nswf covers describe the problem that you d like solved sfw “safe for work” mode let librarians add content warnings subject tags e g content warning pornographic let librarians decide or use archive org flags via the standard edit ui add sfw “safe for work” mode checkbox to account registration page settings have carousels blur items with subject content warnings if account set as “sfw” proposal constraints additional context stakeholders | 0 |
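The carousel-blur behaviour sketched in the checklist above usually comes down to a single CSS rule; a hypothetical sketch (these selectors are invented for illustration, not Open Library's actual class names):

```css
/* Hypothetical selectors: applied when the logged-in account has SFW mode
   enabled and the carousel item carries a content_warnings:* subject. */
.sfw-mode .carousel-item--content-warning img {
  filter: blur(8px);
}
```

Keeping the flag as a body-level class (here `.sfw-mode`) lets the per-account setting toggle the blur without re-rendering the carousels.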
3,140 | 6,193,015,405 | IssuesEvent | 2017-07-05 05:19:10 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | When process.stdin is in raw mode, pressing enter returns Carriage Return ("\r") instead of Linefeed ("\n") or End of Line ("\r\n") | process tty | I suppose this could be intended behaviour, but it seems odd considering that printing "\r" and "\n" results in expected behaviour and reading text from a file where enter was pressed will result in "\n".
| 1.0 | When process.stdin is in raw mode, pressing enter returns Carriage Return ("\r") instead of Linefeed ("\n") or End of Line ("\r\n") - I suppose this could be intended behaviour, but it seems odd considering that printing "\r" and "\n" results in expected behaviour and reading text from a file where enter was pressed will result in "\n".
| process | when process stdin is in raw mode pressing enter returns carriage return r instead of linefeed n or end of line r n i suppose this could be intended behaviour but it seems odd considering that printing r and n results in expected behaviour and reading text from a file where enter was pressed will result in n | 1 |
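The behaviour reported above is not Node-specific: raw mode disables the terminal's ICRNL input translation, so Enter delivers the carriage return the keyboard actually sends, and consumers normalize it themselves. Sketched here in Python purely as an illustration:

```python
def normalize_raw_key(chunk: str) -> str:
    """Map raw-mode line endings back to a plain newline.

    In raw mode the terminal's ICRNL translation is off, so pressing
    Enter delivers "\r" (and some sources deliver "\r\n"); callers that
    expect "\n" have to do the mapping themselves.
    """
    return chunk.replace("\r\n", "\n").replace("\r", "\n")
```

Reading the same keystrokes back from a file shows "\n" because the translation happened in canonical mode when the file was written, which matches the issue's observation.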
40,056 | 12,744,924,579 | IssuesEvent | 2020-06-26 13:23:49 | RG4421/atlasdb | https://api.github.com/repos/RG4421/atlasdb | opened | CVE-2018-1320 (High) detected in libthrift-0.9.2.jar | security vulnerability | ## CVE-2018-1320 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libthrift-0.9.2.jar</b></p></summary>
<p>Thrift is a software framework for scalable cross-language services development.</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.9.2/9b067e2e2c5291e9f0d8b3561b1654286e6d81ee/libthrift-0.9.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.9.2/9b067e2e2c5291e9f0d8b3561b1654286e6d81ee/libthrift-0.9.2.jar,canner/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.9.2/9b067e2e2c5291e9f0d8b3561b1654286e6d81ee/libthrift-0.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **libthrift-0.9.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/atlasdb/commit/6c613675868440052ef3631d79eea71e4ab49c96">6c613675868440052ef3631d79eea71e4ab49c96</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Thrift Java client library versions 0.5.0 through 0.11.0 can bypass SASL negotiation isComplete validation in the org.apache.thrift.transport.TSaslTransport class. An assert used to determine if the SASL handshake had successfully completed could be disabled in production settings making the validation incomplete.
<p>Publish Date: 2019-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1320>CVE-2018-1320</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1320">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1320</a></p>
<p>Release Date: 2019-01-07</p>
<p>Fix Resolution: 0.12.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.thrift","packageName":"libthrift","packageVersion":"0.9.2","isTransitiveDependency":false,"dependencyTree":"org.apache.thrift:libthrift:0.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.12.0"}],"vulnerabilityIdentifier":"CVE-2018-1320","vulnerabilityDetails":"Apache Thrift Java client library versions 0.5.0 through 0.11.0 can bypass SASL negotiation isComplete validation in the org.apache.thrift.transport.TSaslTransport class. An assert used to determine if the SASL handshake had successfully completed could be disabled in production settings making the validation incomplete.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1320","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-1320 (High) detected in libthrift-0.9.2.jar - ## CVE-2018-1320 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libthrift-0.9.2.jar</b></p></summary>
<p>Thrift is a software framework for scalable cross-language services development.</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.9.2/9b067e2e2c5291e9f0d8b3561b1654286e6d81ee/libthrift-0.9.2.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.9.2/9b067e2e2c5291e9f0d8b3561b1654286e6d81ee/libthrift-0.9.2.jar,canner/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.9.2/9b067e2e2c5291e9f0d8b3561b1654286e6d81ee/libthrift-0.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **libthrift-0.9.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/atlasdb/commit/6c613675868440052ef3631d79eea71e4ab49c96">6c613675868440052ef3631d79eea71e4ab49c96</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Thrift Java client library versions 0.5.0 through 0.11.0 can bypass SASL negotiation isComplete validation in the org.apache.thrift.transport.TSaslTransport class. An assert used to determine if the SASL handshake had successfully completed could be disabled in production settings making the validation incomplete.
<p>Publish Date: 2019-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1320>CVE-2018-1320</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1320">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1320</a></p>
<p>Release Date: 2019-01-07</p>
<p>Fix Resolution: 0.12.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.thrift","packageName":"libthrift","packageVersion":"0.9.2","isTransitiveDependency":false,"dependencyTree":"org.apache.thrift:libthrift:0.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.12.0"}],"vulnerabilityIdentifier":"CVE-2018-1320","vulnerabilityDetails":"Apache Thrift Java client library versions 0.5.0 through 0.11.0 can bypass SASL negotiation isComplete validation in the org.apache.thrift.transport.TSaslTransport class. An assert used to determine if the SASL handshake had successfully completed could be disabled in production settings making the validation incomplete.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1320","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_process | cve high detected in libthrift jar cve high severity vulnerability vulnerable library libthrift jar thrift is a software framework for scalable cross language services development path to vulnerable library home wss scanner gradle caches modules files org apache thrift libthrift libthrift jar home wss scanner gradle caches modules files org apache thrift libthrift libthrift jar canner gradle caches modules files org apache thrift libthrift libthrift jar dependency hierarchy x libthrift jar vulnerable library found in head commit a href vulnerability details apache thrift java client library versions through can bypass sasl negotiation iscomplete validation in the org apache thrift transport tsasltransport class an assert used to determine if the sasl handshake had successfully completed could be disabled in production settings making the validation incomplete publish date url a href cvss score details base score metrics exploitability metrics 
attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails apache thrift java client library versions through can bypass sasl negotiation iscomplete validation in the org apache thrift transport tsasltransport class an assert used to determine if the sasl handshake had successfully completed could be disabled in production settings making the validation incomplete vulnerabilityurl | 0 |
40,112 | 16,332,470,433 | IssuesEvent | 2021-05-12 10:57:24 | gradido/gradido | https://api.github.com/repos/gradido/gradido | closed | 🐛 [Bug] Login takes 2.5s | bug service: login server | ## :bug: Bugreport
The login takes too long.
### Steps to reproduce the behavior
1. Login
2. See how long it takes
5. Profit
### Expected behavior
Faster, at max half a second. Alternative: Show a spinner.
| 1.0 | 🐛 [Bug] Login takes 2.5s - ## :bug: Bugreport
The login takes too long.
### Steps to reproduce the behavior
1. Login
2. See how long it takes
5. Profit
### Expected behavior
Faster, at max half a second. Alternative: Show a spinner.
| non_process | 🐛 login takes bug bugreport the login takes too long steps to reproduce the behavior login see how long it takes profit expected behavior faster at max half a second alternative show a spinner | 0 |
27,048 | 7,891,455,237 | IssuesEvent | 2018-06-28 12:14:36 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | Build produces Gradle deprecation warnings for sourceSets.main.output.classesDirs | :Core/Build >non-issue | Here's one Example, but there are others throughout the build:
`./gradlew x-pack:plugin:monitoring:build --dry-run -Dorg.gradle.warning.mode=all --warning-mode=all --stacktrace`
```
Gradle now uses separate output directories for each JVM language, but this build assumes a single directory for all classes from a source set. This behaviour has been deprecated and is scheduled to be removed in Gradle 5.0
at org.gradle.api.internal.tasks.DefaultSourceSetOutput.getClassesDir(DefaultSourceSetOutput.java:81)
at org.gradle.api.internal.tasks.DefaultSourceSetOutput_Decorated.getClassesDir(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at org.codehaus.groovy.runtime.metaclass.MultipleSetterProperty.getProperty(MultipleSetterProperty.java:49)
at org.gradle.internal.metaobject.BeanDynamicObject$MetaClassAdapter.getProperty(BeanDynamicObject.java:228)
at org.gradle.internal.metaobject.BeanDynamicObject.tryGetProperty(BeanDynamicObject.java:171)
at org.gradle.internal.metaobject.CompositeDynamicObject.tryGetProperty(CompositeDynamicObject.java:55)
at org.gradle.internal.metaobject.AbstractDynamicObject.getProperty(AbstractDynamicObject.java:59)
at org.gradle.api.internal.tasks.DefaultSourceSetOutput_Decorated.getProperty(Unknown Source)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:50)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:296)
at org.elasticsearch.gradle.precommit.NamingConventionsTask$_closure1.doCall(NamingConventionsTask.groovy:88)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
at groovy.lang.Closure.call(Closure.java:414)
at groovy.lang.Closure.call(Closure.java:430)
at org.gradle.api.specs.internal.ClosureSpec.isSatisfiedBy(ClosureSpec.java:32)
at org.gradle.api.specs.AndSpec.isSatisfiedBy(AndSpec.java:47)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:42)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:256)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:317)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:309)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:185)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:97)
at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:249)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:238)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:104)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:98)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:663)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:596)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:98)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.base/java.lang.Thread.run(Thread.java:844)
```
| 1.0 | Build produces Gradle deprecation warnings for sourceSets.main.output.classesDirs - Here's one Example, but there are others throughout the build:
`./gradlew x-pack:plugin:monitoring:build --dry-run -Dorg.gradle.warning.mode=all --warning-mode=all --stacktrace`
```
Gradle now uses separate output directories for each JVM language, but this build assumes a single directory for all classes from a source set. This behaviour has been deprecated and is scheduled to be removed in Gradle 5.0
at org.gradle.api.internal.tasks.DefaultSourceSetOutput.getClassesDir(DefaultSourceSetOutput.java:81)
at org.gradle.api.internal.tasks.DefaultSourceSetOutput_Decorated.getClassesDir(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at org.codehaus.groovy.runtime.metaclass.MultipleSetterProperty.getProperty(MultipleSetterProperty.java:49)
at org.gradle.internal.metaobject.BeanDynamicObject$MetaClassAdapter.getProperty(BeanDynamicObject.java:228)
at org.gradle.internal.metaobject.BeanDynamicObject.tryGetProperty(BeanDynamicObject.java:171)
at org.gradle.internal.metaobject.CompositeDynamicObject.tryGetProperty(CompositeDynamicObject.java:55)
at org.gradle.internal.metaobject.AbstractDynamicObject.getProperty(AbstractDynamicObject.java:59)
at org.gradle.api.internal.tasks.DefaultSourceSetOutput_Decorated.getProperty(Unknown Source)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:50)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:296)
at org.elasticsearch.gradle.precommit.NamingConventionsTask$_closure1.doCall(NamingConventionsTask.groovy:88)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
at groovy.lang.Closure.call(Closure.java:414)
at groovy.lang.Closure.call(Closure.java:430)
at org.gradle.api.specs.internal.ClosureSpec.isSatisfiedBy(ClosureSpec.java:32)
at org.gradle.api.specs.AndSpec.isSatisfiedBy(AndSpec.java:47)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:42)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:256)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:317)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:309)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:185)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:97)
at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:249)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:238)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:104)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:98)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:663)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:596)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:98)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.base/java.lang.Thread.run(Thread.java:844)
```
| non_process | build produces gradle deprecation warnings for sourcesets main output classesdirs here s one example but there are others throughout the build gradlew x pack plugin monitoring build dry run dorg gradle warning mode all warning mode all stacktrace gradle now uses separate output directories for each jvm language but this build assumes a single directory for all classes from a source set this behaviour has been deprecated and is scheduled to be removed in gradle at org gradle api internal tasks defaultsourcesetoutput getclassesdir defaultsourcesetoutput java at org gradle api internal tasks defaultsourcesetoutput decorated getclassesdir unknown source at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org codehaus groovy reflection cachedmethod invoke cachedmethod java at org codehaus groovy runtime metaclass multiplesetterproperty getproperty multiplesetterproperty java at org gradle internal metaobject beandynamicobject metaclassadapter getproperty beandynamicobject java at org gradle internal metaobject beandynamicobject trygetproperty beandynamicobject java at org gradle internal metaobject compositedynamicobject trygetproperty compositedynamicobject java at org gradle internal metaobject abstractdynamicobject getproperty abstractdynamicobject java at org gradle api internal tasks defaultsourcesetoutput decorated getproperty unknown source at org codehaus groovy runtime callsite pogogetpropertysite getproperty pogogetpropertysite java at org codehaus groovy runtime callsite abstractcallsite callgetproperty abstractcallsite java at org elasticsearch gradle precommit namingconventionstask docall namingconventionstask groovy at java base jdk internal reflect 
nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org codehaus groovy reflection cachedmethod invoke cachedmethod java at groovy lang metamethod domethodinvoke metamethod java at org codehaus groovy runtime metaclass closuremetaclass invokemethod closuremetaclass java at groovy lang metaclassimpl invokemethod metaclassimpl java at groovy lang closure call closure java at groovy lang closure call closure java at org gradle api specs internal closurespec issatisfiedby closurespec java at org gradle api specs andspec issatisfiedby andspec java at org gradle api internal tasks execution skiponlyiftaskexecuter execute skiponlyiftaskexecuter java at org gradle api internal tasks execution executeatmostoncetaskexecuter execute executeatmostoncetaskexecuter java at org gradle api internal tasks execution catchexceptiontaskexecuter execute catchexceptiontaskexecuter java at org gradle execution taskgraph defaulttaskgraphexecuter eventfiringtaskworker run defaulttaskgraphexecuter java at org gradle internal operations defaultbuildoperationexecutor runnablebuildoperationworker execute defaultbuildoperationexecutor java at org gradle internal operations defaultbuildoperationexecutor runnablebuildoperationworker execute defaultbuildoperationexecutor java at org gradle internal operations defaultbuildoperationexecutor execute defaultbuildoperationexecutor java at org gradle internal operations defaultbuildoperationexecutor run defaultbuildoperationexecutor java at org gradle internal operations delegatingbuildoperationexecutor run delegatingbuildoperationexecutor java at org gradle execution taskgraph defaulttaskgraphexecuter eventfiringtaskworker execute defaulttaskgraphexecuter java at org gradle execution taskgraph 
defaulttaskgraphexecuter eventfiringtaskworker execute defaulttaskgraphexecuter java at org gradle execution taskgraph defaulttaskplanexecutor taskexecutorworker execute defaulttaskplanexecutor java at org gradle execution taskgraph defaulttaskplanexecutor taskexecutorworker execute defaulttaskplanexecutor java at org gradle execution taskgraph defaulttaskexecutionplan execute defaulttaskexecutionplan java at org gradle execution taskgraph defaulttaskexecutionplan executewithtask defaulttaskexecutionplan java at org gradle execution taskgraph defaulttaskplanexecutor taskexecutorworker run defaulttaskplanexecutor java at org gradle internal concurrent executorpolicy catchandrecordfailures onexecute executorpolicy java at org gradle internal concurrent managedexecutorimpl run managedexecutorimpl java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org gradle internal concurrent threadfactoryimpl managedthreadrunnable run threadfactoryimpl java at java base java lang thread run thread java | 0 |
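The deprecation reported in the record above comes from reading `sourceSets.main.output.classesDir`, which Gradle 4.0 replaced with the `classesDirs` file collection (one output directory per JVM language, so java and groovy classes no longer share a directory). A minimal build-script sketch of the migration — the task name is illustrative and not part of the Elasticsearch build:

```groovy
// Before (deprecated since Gradle 4.0 — assumes a single output directory):
//   def dir = sourceSets.main.output.classesDir
//
// After — classesDirs is a FileCollection with one entry per JVM language,
// so callers must iterate instead of assuming a single directory:
task listClassDirs {
    doLast {
        sourceSets.main.output.classesDirs.each { dir ->
            println "classes dir: ${dir}"
        }
    }
}
```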
10,239 | 13,098,707,700 | IssuesEvent | 2020-08-03 20:02:44 | googleapis/google-cloud-ruby | https://api.github.com/repos/googleapis/google-cloud-ruby | closed | Migrate google-cloud-datastore to the microgenerator | api: datastore type: process | Migrate google-cloud-datastore to the microgenerator. This involves the following steps:
* [x] Write synth file and generate `google-cloud-datastore-v1`
* [x] Make sure the new libraries are configured in kokoro
* [x] Release `google-cloud-datastore-v1`
* [x] Switch `google-cloud-datastore` backend to the versioned gems. That is:
* Rip out synth and all the generated code
* Add `google-cloud-datastore-v1` as a dependency
* Update the veneer code to the microgenerator usage
* [ ] Release `google-cloud-datastore` update
I do not believe samples need to be updated, unless they invoke the low-level interface directly. | 1.0 | Migrate google-cloud-datastore to the microgenerator - Migrate google-cloud-datastore to the microgenerator. This involves the following steps:
* [x] Write synth file and generate `google-cloud-datastore-v1`
* [x] Make sure the new libraries are configured in kokoro
* [x] Release `google-cloud-datastore-v1`
* [x] Switch `google-cloud-datastore` backend to the versioned gems. That is:
* Rip out synth and all the generated code
* Add `google-cloud-datastore-v1` as a dependency
* Update the veneer code to the microgenerator usage
* [ ] Release `google-cloud-datastore` update
I do not believe samples need to be updated, unless they invoke the low-level interface directly. | process | migrate google cloud datastore to the microgenerator migrate google cloud datastore to the microgenerator this involves the following steps write synth file and generate google cloud datastore make sure the new libraries are configured in kokoro release google cloud datastore switch google cloud datastore backend to the versioned gems that is rip out synth and all the generated code add google cloud datastore as a dependency update the veneer code to the microgenerator usage release google cloud datastore update i do not believe samples need to be updated unless they invoke the low level interface directly | 1 |
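The "add `google-cloud-datastore-v1` as a dependency" step in the checklist above amounts to a one-line gemspec change once the generated code is ripped out. A minimal sketch — the version numbers and summary here are illustrative assumptions, not taken from the actual gem:

```ruby
require "rubygems"

# Hypothetical post-migration gemspec fragment: the wrapper gem drops its
# generated client code and depends on the versioned, microgenerated gem.
spec = Gem::Specification.new do |s|
  s.name    = "google-cloud-datastore"
  s.version = "2.0.0" # assumed post-migration version, for illustration only
  s.summary = "Veneer over the microgenerated versioned client"

  # The versioned client becomes a runtime dependency of the veneer.
  s.add_dependency "google-cloud-datastore-v1", "~> 1.0"
end
```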
14,195 | 17,099,045,911 | IssuesEvent | 2021-07-09 08:37:45 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | Error: Error in migration engine. Reason: [migration-engine/connectors/sql-migration-connector/src/sql_renderer/mssql_renderer.rs:394:9] not yet implemented: DROP TYPE [dbo].[syspolicy_target_filters_type] | bug/1-repro-available kind/bug process/candidate | <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma migrate reset --preview-feature`
Version: `2.22.1`
Binary Version: `60cc71d884972ab4e897f0277c4b84383dddaf6c`
Report: https://prisma-errors.netlify.app/report/13406
OS: `x64 darwin 19.6.0`
JS Stacktrace:
```
Error: Error in migration engine.
Reason: [migration-engine/connectors/sql-migration-connector/src/sql_renderer/mssql_renderer.rs:394:9] not yet implemented: DROP TYPE [dbo].[syspolicy_target_filters_type]
Please create an issue with your `schema.prisma` at
https://github.com/prisma/prisma/issues/new
at ChildProcess.<anonymous> (/Users/<censored>/index.js:56818:23)
at ChildProcess.emit (events.js:315:20)
at ChildProcess.EventEmitter.emit (domain.js:483:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:275:12)
```
Rust Stacktrace:
```
[migration-engine/connectors/sql-migration-connector/src/sql_renderer/mssql_renderer.rs:394:9] not yet implemented: DROP TYPE [dbo].[syspolicy_target_filters_type]
```
| 1.0 | Error: Error in migration engine. Reason: [migration-engine/connectors/sql-migration-connector/src/sql_renderer/mssql_renderer.rs:394:9] not yet implemented: DROP TYPE [dbo].[syspolicy_target_filters_type] - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma migrate reset --preview-feature`
Version: `2.22.1`
Binary Version: `60cc71d884972ab4e897f0277c4b84383dddaf6c`
Report: https://prisma-errors.netlify.app/report/13406
OS: `x64 darwin 19.6.0`
JS Stacktrace:
```
Error: Error in migration engine.
Reason: [migration-engine/connectors/sql-migration-connector/src/sql_renderer/mssql_renderer.rs:394:9] not yet implemented: DROP TYPE [dbo].[syspolicy_target_filters_type]
Please create an issue with your `schema.prisma` at
https://github.com/prisma/prisma/issues/new
at ChildProcess.<anonymous> (/Users/<censored>/index.js:56818:23)
at ChildProcess.emit (events.js:315:20)
at ChildProcess.EventEmitter.emit (domain.js:483:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:275:12)
```
Rust Stacktrace:
```
[migration-engine/connectors/sql-migration-connector/src/sql_renderer/mssql_renderer.rs:394:9] not yet implemented: DROP TYPE [dbo].[syspolicy_target_filters_type]
```
| process | error error in migration engine reason not yet implemented drop type command prisma migrate reset preview feature version binary version report os darwin js stacktrace error error in migration engine reason not yet implemented drop type please create an issue with your schema prisma at at childprocess users index js at childprocess emit events js at childprocess eventemitter emit domain js at process childprocess handle onexit internal child process js rust stacktrace not yet implemented drop type | 1 |
5,177 | 7,960,236,924 | IssuesEvent | 2018-07-13 06:14:18 | Rokid/ShadowNode | https://api.github.com/repos/Rokid/ShadowNode | closed | child_process: memory leaks | bug child_process | ```js
var exec = require('child_process').exec
function main() {
exec('ls', (err, stdout, stderr) => {
console.log('success?', err == null)
setTimeout(main, 0)
})
}
main()
``` | 1.0 | child_process: memory leaks - ```js
var exec = require('child_process').exec
function main() {
exec('ls', (err, stdout, stderr) => {
console.log('success?', err == null)
setTimeout(main, 0)
})
}
main()
``` | process | child process memory leaks js var exec require child process exec function main exec ls err stdout stderr console log success err null settimeout main main | 1 |
39,516 | 10,348,410,313 | IssuesEvent | 2019-09-04 19:44:30 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | grpc_basictests_multilang failing on 'apt-get update' | area/core infra/BUILDPONY lang/core | Ref: https://source.cloud.google.com/results/invocations/c1552253-d359-4b64-a01d-409f105439ea/log
```
+ sudo apt-get update
Hit http://repo.stackdriver.com trusty InRelease
Hit http://repo.stackdriver.com trusty/main amd64 Packages
Ign http://developer.download.nvidia.com InRelease
Get:1 http://developer.download.nvidia.com Release.gpg [819 B]
Hit http://repo.stackdriver.com trusty/main i386 Packages
Get:2 http://developer.download.nvidia.com Release [564 B]
Get:3 http://developer.download.nvidia.com Packages [204 kB]
Get:4 https://nvidia.github.io InRelease
Hit http://ppa.launchpad.net trusty InRelease
Get:5 https://nvidia.github.io InRelease
Get:6 https://nvidia.github.io InRelease
Hit https://deb.nodesource.com trusty InRelease
Get:7 https://nvidia.github.io Packages
Get:8 https://nvidia.github.io Translation-en_US
Get:9 https://dl.yarnpkg.com stable InRelease
Hit https://deb.nodesource.com trusty/main Sources
Hit https://deb.nodesource.com trusty/main amd64 Packages
Hit https://deb.nodesource.com trusty/main i386 Packages
Get:10 https://nvidia.github.io Translation-en_US
Get:11 https://dl.yarnpkg.com stable/main amd64 Packages
Ign http://dl.google.com stable InRelease
Get:12 https://deb.nodesource.com trusty/main Translation-en_US
Get:13 https://dl.yarnpkg.com stable/main i386 Packages
Get:14 https://dl.yarnpkg.com stable/main Translation-en_US
Get:15 https://dl.yarnpkg.com stable/main Translation-en_US
Get:16 https://dl.yarnpkg.com stable/main Translation-en_US
Get:17 https://dl.yarnpkg.com stable/main Translation-en_US
Get:18 https://dl.yarnpkg.com stable/main Translation-en_US
Get:19 https://dl.yarnpkg.com stable/main Translation-en_US
Get:20 https://dl.yarnpkg.com stable/main Translation-en_US
Get:21 https://dl.yarnpkg.com stable/main Translation-en_US
Get:22 https://dl.yarnpkg.com stable/main Translation-en
Get:23 https://dl.yarnpkg.com stable/main Translation-en
Get:24 https://dl.yarnpkg.com stable/main Translation-en
Get:25 https://dl.yarnpkg.com stable/main Translation-en
Get:26 https://dl.yarnpkg.com stable/main Translation-en
Get:27 https://dl.yarnpkg.com stable/main Translation-en
Get:28 https://dl.yarnpkg.com stable/main Translation-en
Get:29 https://dl.yarnpkg.com stable/main Translation-en
Ign http://repo.stackdriver.com trusty/main Translation-en_US
Get:30 https://nvidia.github.io Translation-en_US
Get:31 https://dl.yarnpkg.com stable/main Translation-en_US
Get:32 https://dl.yarnpkg.com stable/main Translation-en_US
Get:33 https://dl.yarnpkg.com stable/main Translation-en_US
Ign http://dl.google.com stable InRelease
Get:34 https://dl.yarnpkg.com stable/main Translation-en
Get:35 https://dl.yarnpkg.com stable/main Translation-en
Get:36 https://dl.yarnpkg.com stable/main Translation-en
Ign http://repo.stackdriver.com trusty/main Translation-en
Get:37 https://download.docker.com trusty/stable Translation-en_US
Get:38 https://dl.yarnpkg.com stable/main Translation-en_US
Get:39 https://dl.yarnpkg.com stable/main Translation-en_US
Get:40 https://dl.yarnpkg.com stable/main Translation-en_US
Ign https://deb.nodesource.com trusty/main Translation-en_US
Get:41 http://ppa.launchpad.net trusty InRelease [20.8 kB]
Get:42 https://dl.yarnpkg.com stable/main Translation-en
Get:43 https://dl.yarnpkg.com stable/main Translation-en
Get:44 https://dl.yarnpkg.com stable/main Translation-en
Ign https://deb.nodesource.com trusty/main Translation-en
Hit http://dl.google.com stable Release.gpg
Get:45 https://dl.yarnpkg.com stable/main Translation-en_US
Get:46 https://dl.yarnpkg.com stable/main Translation-en_US
Get:47 https://dl.yarnpkg.com stable/main Translation-en_US
Get:48 http://dl.google.com stable Release.gpg [819 B]
Get:49 https://dl.yarnpkg.com stable/main Translation-en
Get:50 https://dl.yarnpkg.com stable/main Translation-en
Get:51 https://dl.yarnpkg.com stable/main Translation-en
Get:52 https://dl.yarnpkg.com stable/main Translation-en_US
Get:53 https://dl.yarnpkg.com stable/main Translation-en_US
Get:54 https://dl.yarnpkg.com stable/main Translation-en_US
Ign https://dl.yarnpkg.com stable/main Translation-en_US
Get:55 https://dl.yarnpkg.com stable/main Translation-en
Get:56 https://dl.yarnpkg.com stable/main Translation-en
Get:57 https://dl.yarnpkg.com stable/main Translation-en
Ign https://dl.yarnpkg.com stable/main Translation-en
Ign https://nvidia.github.io Translation-en_US
Ign https://nvidia.github.io Translation-en
Ign https://nvidia.github.io Translation-en_US
Get:58 http://ppa.launchpad.net trusty InRelease [15.4 kB]
Ign https://nvidia.github.io Translation-en
Hit http://dl.google.com stable Release
Ign https://nvidia.github.io Translation-en_US
Get:59 http://dl.google.com stable Release [943 B]
Ign https://nvidia.github.io Translation-en
Hit http://dl.google.com stable/main amd64 Packages
Ign http://developer.download.nvidia.com Translation-en_US
Ign http://developer.download.nvidia.com Translation-en
Hit http://ppa.launchpad.net trusty InRelease
Ign https://download.docker.com trusty/stable Translation-en_US
Get:60 http://ppa.launchpad.net trusty InRelease [20.8 kB]
Ign https://download.docker.com trusty/stable Translation-en
Get:61 http://dl.google.com stable/main amd64 Packages [1,110 B]
Ign http://ppa.launchpad.net precise InRelease
Hit http://ppa.launchpad.net trusty InRelease
Hit http://ppa.launchpad.net trusty InRelease
Hit http://ppa.launchpad.net trusty InRelease
Hit http://ppa.launchpad.net trusty InRelease
Hit http://ppa.launchpad.net trusty/main amd64 Packages
Hit http://ppa.launchpad.net trusty/main i386 Packages
Ign http://dl.google.com stable/main Translation-en_US
Ign http://dl.google.com stable/main Translation-en
Hit http://ppa.launchpad.net trusty/main Translation-en
Get:62 http://ppa.launchpad.net trusty/main amd64 Packages [8,857 B]
Ign http://dl.google.com stable/main Translation-en_US
Ign http://dl.google.com stable/main Translation-en
Get:63 http://ppa.launchpad.net trusty/main i386 Packages [8,845 B]
Get:64 http://ppa.launchpad.net trusty/main Translation-en [4,703 B]
Get:65 http://ppa.launchpad.net trusty/main amd64 Packages [1,313 B]
Get:66 http://ppa.launchpad.net trusty/main i386 Packages [1,313 B]
Get:67 http://ppa.launchpad.net trusty/main Translation-en [1,114 B]
Hit http://ppa.launchpad.net trusty/main amd64 Packages
Hit http://ppa.launchpad.net trusty/main i386 Packages
Hit http://ppa.launchpad.net trusty/main Translation-en
Get:68 http://ppa.launchpad.net trusty/main amd64 Packages [3,420 B]
Get:69 http://ppa.launchpad.net trusty/main i386 Packages [3,424 B]
Get:70 http://ppa.launchpad.net trusty/main Translation-en [2,594 B]
Hit http://ppa.launchpad.net precise Release.gpg
Hit http://ppa.launchpad.net trusty/main amd64 Packages
Hit http://ppa.launchpad.net trusty/main i386 Packages
Hit http://ppa.launchpad.net trusty/main Translation-en
Hit http://ppa.launchpad.net trusty/main amd64 Packages
Hit http://ppa.launchpad.net trusty/main i386 Packages
Hit http://ppa.launchpad.net trusty/main Translation-en
Hit http://ppa.launchpad.net trusty/main amd64 Packages
Hit http://ppa.launchpad.net trusty/main i386 Packages
Hit http://ppa.launchpad.net trusty/main Translation-en
Hit http://ppa.launchpad.net trusty/main amd64 Packages
Ign http://us.archive.ubuntu.com trusty InRelease
Hit http://ppa.launchpad.net trusty/main i386 Packages
Get:71 http://us.archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Hit http://us.archive.ubuntu.com trusty-backports InRelease
Hit http://us.archive.ubuntu.com trusty Release.gpg
Hit http://ppa.launchpad.net trusty/main Translation-en
Get:72 http://us.archive.ubuntu.com trusty-updates/main Sources [429 kB]
Hit http://ppa.launchpad.net precise Release
Hit http://ppa.launchpad.net precise/main Sources
Hit http://ppa.launchpad.net precise/main amd64 Packages
Hit http://ppa.launchpad.net precise/main i386 Packages
Get:73 http://us.archive.ubuntu.com trusty-updates/restricted Sources [6,313 B]
Get:74 http://us.archive.ubuntu.com trusty-updates/universe Sources [231 kB]
Ign http://ppa.launchpad.net precise/main Translation-en_US
Ign http://ppa.launchpad.net precise/main Translation-en
Get:75 http://us.archive.ubuntu.com trusty-updates/multiverse Sources [7,528 B]
Get:76 http://us.archive.ubuntu.com trusty-updates/main amd64 Packages [1,165 kB]
Get:77 http://us.archive.ubuntu.com trusty-updates/restricted amd64 Packages [17.2 kB]
Get:78 http://us.archive.ubuntu.com trusty-updates/universe amd64 Packages [523 kB]
Get:79 http://us.archive.ubuntu.com trusty-updates/multiverse amd64 Packages [14.6 kB]
Get:80 http://us.archive.ubuntu.com trusty-updates/main i386 Packages [1,082 kB]
Get:81 http://us.archive.ubuntu.com trusty-updates/restricted i386 Packages [17.0 kB]
Get:82 http://us.archive.ubuntu.com trusty-updates/universe i386 Packages [503 kB]
Get:83 http://us.archive.ubuntu.com trusty-updates/multiverse i386 Packages [15.1 kB]
Get:84 http://us.archive.ubuntu.com trusty-updates/main Translation-en [576 kB]
Get:85 http://us.archive.ubuntu.com trusty-updates/multiverse Translation-en [7,616 B]
Get:86 http://us.archive.ubuntu.com trusty-updates/restricted Translation-en [4,028 B]
Get:87 http://us.archive.ubuntu.com trusty-updates/universe Translation-en [277 kB]
Hit http://us.archive.ubuntu.com trusty-backports/main Sources
Hit http://us.archive.ubuntu.com trusty-backports/restricted Sources
Hit http://us.archive.ubuntu.com trusty-backports/universe Sources
Hit http://us.archive.ubuntu.com trusty-backports/multiverse Sources
Hit http://us.archive.ubuntu.com trusty-backports/main amd64 Packages
Hit http://us.archive.ubuntu.com trusty-backports/restricted amd64 Packages
Hit http://us.archive.ubuntu.com trusty-backports/universe amd64 Packages
Hit http://us.archive.ubuntu.com trusty-backports/multiverse amd64 Packages
Hit http://us.archive.ubuntu.com trusty-backports/main i386 Packages
Hit http://us.archive.ubuntu.com trusty-backports/restricted i386 Packages
Hit http://us.archive.ubuntu.com trusty-backports/universe i386 Packages
Hit http://us.archive.ubuntu.com trusty-backports/multiverse i386 Packages
Hit http://us.archive.ubuntu.com trusty-backports/main Translation-en
Hit http://us.archive.ubuntu.com trusty-backports/multiverse Translation-en
Hit http://us.archive.ubuntu.com trusty-backports/restricted Translation-en
Hit http://us.archive.ubuntu.com trusty-backports/universe Translation-en
Hit http://us.archive.ubuntu.com trusty Release
Hit http://us.archive.ubuntu.com trusty/main Sources
Hit http://us.archive.ubuntu.com trusty/restricted Sources
Hit http://us.archive.ubuntu.com trusty/universe Sources
Hit http://us.archive.ubuntu.com trusty/multiverse Sources
Hit http://us.archive.ubuntu.com trusty/main amd64 Packages
Hit http://us.archive.ubuntu.com trusty/restricted amd64 Packages
Hit http://us.archive.ubuntu.com trusty/universe amd64 Packages
Hit http://us.archive.ubuntu.com trusty/multiverse amd64 Packages
Hit http://us.archive.ubuntu.com trusty/main i386 Packages
Hit http://us.archive.ubuntu.com trusty/restricted i386 Packages
Hit http://us.archive.ubuntu.com trusty/universe i386 Packages
Hit http://us.archive.ubuntu.com trusty/multiverse i386 Packages
Hit http://us.archive.ubuntu.com trusty/main Translation-en
Hit http://us.archive.ubuntu.com trusty/multiverse Translation-en
Hit http://us.archive.ubuntu.com trusty/restricted Translation-en
Hit http://us.archive.ubuntu.com trusty/universe Translation-en
Ign http://us.archive.ubuntu.com trusty/main Translation-en_US
Ign http://us.archive.ubuntu.com trusty/multiverse Translation-en_US
Ign http://us.archive.ubuntu.com trusty/restricted Translation-en_US
Ign http://us.archive.ubuntu.com trusty/universe Translation-en_US
Get:88 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
Get:89 http://security.ubuntu.com trusty-security/main Sources [172 kB]
Get:90 http://security.ubuntu.com trusty-security/restricted Sources [4,931 B]
Get:91 http://security.ubuntu.com trusty-security/universe Sources [102 kB]
Get:92 http://security.ubuntu.com trusty-security/multiverse Sources [3,264 B]
Get:93 http://security.ubuntu.com trusty-security/main amd64 Packages [828 kB]
Get:94 http://security.ubuntu.com trusty-security/restricted amd64 Packages [14.2 kB]
Get:95 http://security.ubuntu.com trusty-security/universe amd64 Packages [289 kB]
Get:96 http://security.ubuntu.com trusty-security/multiverse amd64 Packages [4,797 B]
Get:97 http://security.ubuntu.com trusty-security/main i386 Packages [746 kB]
Get:98 http://security.ubuntu.com trusty-security/restricted i386 Packages [13.9 kB]
Get:99 http://security.ubuntu.com trusty-security/universe i386 Packages [273 kB]
Get:100 http://security.ubuntu.com trusty-security/multiverse i386 Packages [4,964 B]
Get:101 http://security.ubuntu.com trusty-security/main Translation-en [444 kB]
Get:102 http://security.ubuntu.com trusty-security/multiverse Translation-en [2,564 B]
Get:103 http://security.ubuntu.com trusty-security/restricted Translation-en [3,556 B]
Get:104 http://security.ubuntu.com trusty-security/universe Translation-en [157 kB]
Fetched 8,456 kB in 2min 41s (52.3 kB/s)
W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/trusty-updates/universe/i18n/Translation-en Hash Sum mismatch
``` | 1.0 | grpc_basictests_multilang failing on 'apt-get update' - Ref: https://source.cloud.google.com/results/invocations/c1552253-d359-4b64-a01d-409f105439ea/log
Get:78 http://us.archive.ubuntu.com trusty-updates/universe amd64 Packages [523 kB]
Get:79 http://us.archive.ubuntu.com trusty-updates/multiverse amd64 Packages [14.6 kB]
Get:80 http://us.archive.ubuntu.com trusty-updates/main i386 Packages [1,082 kB]
Get:81 http://us.archive.ubuntu.com trusty-updates/restricted i386 Packages [17.0 kB]
Get:82 http://us.archive.ubuntu.com trusty-updates/universe i386 Packages [503 kB]
Get:83 http://us.archive.ubuntu.com trusty-updates/multiverse i386 Packages [15.1 kB]
Get:84 http://us.archive.ubuntu.com trusty-updates/main Translation-en [576 kB]
Get:85 http://us.archive.ubuntu.com trusty-updates/multiverse Translation-en [7,616 B]
Get:86 http://us.archive.ubuntu.com trusty-updates/restricted Translation-en [4,028 B]
Get:87 http://us.archive.ubuntu.com trusty-updates/universe Translation-en [277 kB]
Hit http://us.archive.ubuntu.com trusty-backports/main Sources
Hit http://us.archive.ubuntu.com trusty-backports/restricted Sources
Hit http://us.archive.ubuntu.com trusty-backports/universe Sources
Hit http://us.archive.ubuntu.com trusty-backports/multiverse Sources
Hit http://us.archive.ubuntu.com trusty-backports/main amd64 Packages
Hit http://us.archive.ubuntu.com trusty-backports/restricted amd64 Packages
Hit http://us.archive.ubuntu.com trusty-backports/universe amd64 Packages
Hit http://us.archive.ubuntu.com trusty-backports/multiverse amd64 Packages
Hit http://us.archive.ubuntu.com trusty-backports/main i386 Packages
Hit http://us.archive.ubuntu.com trusty-backports/restricted i386 Packages
Hit http://us.archive.ubuntu.com trusty-backports/universe i386 Packages
Hit http://us.archive.ubuntu.com trusty-backports/multiverse i386 Packages
Hit http://us.archive.ubuntu.com trusty-backports/main Translation-en
Hit http://us.archive.ubuntu.com trusty-backports/multiverse Translation-en
Hit http://us.archive.ubuntu.com trusty-backports/restricted Translation-en
Hit http://us.archive.ubuntu.com trusty-backports/universe Translation-en
Hit http://us.archive.ubuntu.com trusty Release
Hit http://us.archive.ubuntu.com trusty/main Sources
Hit http://us.archive.ubuntu.com trusty/restricted Sources
Hit http://us.archive.ubuntu.com trusty/universe Sources
Hit http://us.archive.ubuntu.com trusty/multiverse Sources
Hit http://us.archive.ubuntu.com trusty/main amd64 Packages
Hit http://us.archive.ubuntu.com trusty/restricted amd64 Packages
Hit http://us.archive.ubuntu.com trusty/universe amd64 Packages
Hit http://us.archive.ubuntu.com trusty/multiverse amd64 Packages
Hit http://us.archive.ubuntu.com trusty/main i386 Packages
Hit http://us.archive.ubuntu.com trusty/restricted i386 Packages
Hit http://us.archive.ubuntu.com trusty/universe i386 Packages
Hit http://us.archive.ubuntu.com trusty/multiverse i386 Packages
Hit http://us.archive.ubuntu.com trusty/main Translation-en
Hit http://us.archive.ubuntu.com trusty/multiverse Translation-en
Hit http://us.archive.ubuntu.com trusty/restricted Translation-en
Hit http://us.archive.ubuntu.com trusty/universe Translation-en
Ign http://us.archive.ubuntu.com trusty/main Translation-en_US
Ign http://us.archive.ubuntu.com trusty/multiverse Translation-en_US
Ign http://us.archive.ubuntu.com trusty/restricted Translation-en_US
Ign http://us.archive.ubuntu.com trusty/universe Translation-en_US
Get:88 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
Get:89 http://security.ubuntu.com trusty-security/main Sources [172 kB]
Get:90 http://security.ubuntu.com trusty-security/restricted Sources [4,931 B]
Get:91 http://security.ubuntu.com trusty-security/universe Sources [102 kB]
Get:92 http://security.ubuntu.com trusty-security/multiverse Sources [3,264 B]
Get:93 http://security.ubuntu.com trusty-security/main amd64 Packages [828 kB]
Get:94 http://security.ubuntu.com trusty-security/restricted amd64 Packages [14.2 kB]
Get:95 http://security.ubuntu.com trusty-security/universe amd64 Packages [289 kB]
Get:96 http://security.ubuntu.com trusty-security/multiverse amd64 Packages [4,797 B]
Get:97 http://security.ubuntu.com trusty-security/main i386 Packages [746 kB]
Get:98 http://security.ubuntu.com trusty-security/restricted i386 Packages [13.9 kB]
Get:99 http://security.ubuntu.com trusty-security/universe i386 Packages [273 kB]
Get:100 http://security.ubuntu.com trusty-security/multiverse i386 Packages [4,964 B]
Get:101 http://security.ubuntu.com trusty-security/main Translation-en [444 kB]
Get:102 http://security.ubuntu.com trusty-security/multiverse Translation-en [2,564 B]
Get:103 http://security.ubuntu.com trusty-security/restricted Translation-en [3,556 B]
Get:104 http://security.ubuntu.com trusty-security/universe Translation-en [157 kB]
Fetched 8,456 kB in 2min 41s (52.3 kB/s)
W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/trusty-updates/universe/i18n/Translation-en Hash Sum mismatch
``` | non_process | grpc basictests multilang failing on apt get update ref sudo apt get update hit trusty inrelease hit trusty main packages ign inrelease get release gpg hit trusty main packages get release get packages get inrelease hit trusty inrelease get inrelease get inrelease hit trusty inrelease get packages get translation en us get stable inrelease hit trusty main sources hit trusty main packages hit trusty main packages get translation en us get stable main packages ign stable inrelease get trusty main translation en us get stable main packages get stable main translation en us get stable main translation en us get stable main translation en us get stable main translation en us get stable main translation en us get stable main translation en us get stable main translation en us get stable main translation en us get stable main translation en get stable main translation en get stable main translation en get stable main translation en get stable main translation en get stable main translation en get stable main translation en get stable main translation en ign trusty main translation en us get translation en us get stable main translation en us get stable main translation en us get stable main translation en us ign stable inrelease get stable main translation en get stable main translation en get stable main translation en ign trusty main translation en get trusty stable translation en us get stable main translation en us get stable main translation en us get stable main translation en us ign trusty main translation en us get trusty inrelease get stable main translation en get stable main translation en get stable main translation en ign trusty main translation en hit stable release gpg get stable main translation en us get stable main translation en us get stable main translation en us get stable release gpg get stable main translation en get stable main translation en get stable main translation en get stable main translation en us get stable main 
translation en us get stable main translation en us ign stable main translation en us get stable main translation en get stable main translation en get stable main translation en ign stable main translation en ign translation en us ign translation en ign translation en us get trusty inrelease ign translation en hit stable release ign translation en us get stable release ign translation en hit stable main packages ign translation en us ign translation en hit trusty inrelease ign trusty stable translation en us get trusty inrelease ign trusty stable translation en get stable main packages ign precise inrelease hit trusty inrelease hit trusty inrelease hit trusty inrelease hit trusty inrelease hit trusty main packages hit trusty main packages ign stable main translation en us ign stable main translation en hit trusty main translation en get trusty main packages ign stable main translation en us ign stable main translation en get trusty main packages get trusty main translation en get trusty main packages get trusty main packages get trusty main translation en hit trusty main packages hit trusty main packages hit trusty main translation en get trusty main packages get trusty main packages get trusty main translation en hit precise release gpg hit trusty main packages hit trusty main packages hit trusty main translation en hit trusty main packages hit trusty main packages hit trusty main translation en hit trusty main packages hit trusty main packages hit trusty main translation en hit trusty main packages ign trusty inrelease hit trusty main packages get trusty updates inrelease hit trusty backports inrelease hit trusty release gpg hit trusty main translation en get trusty updates main sources hit precise release hit precise main sources hit precise main packages hit precise main packages get trusty updates restricted sources get trusty updates universe sources ign precise main translation en us ign precise main translation en get trusty updates multiverse sources get 
trusty updates main packages get trusty updates restricted packages get trusty updates universe packages get trusty updates multiverse packages get trusty updates main packages get trusty updates restricted packages get trusty updates universe packages get trusty updates multiverse packages get trusty updates main translation en get trusty updates multiverse translation en get trusty updates restricted translation en get trusty updates universe translation en hit trusty backports main sources hit trusty backports restricted sources hit trusty backports universe sources hit trusty backports multiverse sources hit trusty backports main packages hit trusty backports restricted packages hit trusty backports universe packages hit trusty backports multiverse packages hit trusty backports main packages hit trusty backports restricted packages hit trusty backports universe packages hit trusty backports multiverse packages hit trusty backports main translation en hit trusty backports multiverse translation en hit trusty backports restricted translation en hit trusty backports universe translation en hit trusty release hit trusty main sources hit trusty restricted sources hit trusty universe sources hit trusty multiverse sources hit trusty main packages hit trusty restricted packages hit trusty universe packages hit trusty multiverse packages hit trusty main packages hit trusty restricted packages hit trusty universe packages hit trusty multiverse packages hit trusty main translation en hit trusty multiverse translation en hit trusty restricted translation en hit trusty universe translation en ign trusty main translation en us ign trusty multiverse translation en us ign trusty restricted translation en us ign trusty universe translation en us get trusty security inrelease get trusty security main sources get trusty security restricted sources get trusty security universe sources get trusty security multiverse sources get trusty security main packages get trusty security 
restricted packages get trusty security universe packages get trusty security multiverse packages get trusty security main packages get trusty security restricted packages get trusty security universe packages get trusty security multiverse packages get trusty security main translation en get trusty security multiverse translation en get trusty security restricted translation en get trusty security universe translation en fetched kb in kb s w failed to fetch hash sum mismatch | 0 |
108,731 | 9,329,632,664 | IssuesEvent | 2019-03-28 03:13:20 | bwsw/cloudstack-ui | https://api.github.com/repos/bwsw/cloudstack-ui | closed | [e2e-tests] Fix e2e-tests for Login, VM creation - Fix the e2e tests so that they work correctly in the current version of master. | non_process | fix tests for login vm creation fix the tests so that they work correctly in the current version of master | 0
256,575 | 22,064,035,209 | IssuesEvent | 2022-05-30 23:07:41 | microsoft/jacdac | https://api.github.com/repos/microsoft/jacdac | closed | dealing with noisy values (closeTo implementation) | help wanted test | the implementation of closeTo(expr,goal,error) records the goal and error values and continuously evaluates expr until its value is in [goal-error, goal+error]. However, as the expr may bounce around a bit, it's possible that after getting in range, it bounces out, then in, etc. Perhaps we should keep a window of last N readings and when we are in range for more than N/2 of the last N readings, declare success. Thoughts? | 1.0 | dealing with noisy values (closeTo implementation) - the implementation of closeTo(expr,goal,error) records the goal and error values and continuously evaluates expr until its value is in [goal-error, goal+error]. However, as the expr may bounce around a bit, it's possible that after getting in range, it bounces out, then in, etc. Perhaps we should keep a window of last N readings and when we are in range for more than N/2 of the last N readings, declare success. Thoughts? | non_process | dealing with noisy values closeto implementation the implementation of closeto expr goal error records the goal and error values and continuously evaluates expr until its value is in however as the expr may bounce around a bit it s possible that after getting in range it bounces out then in etc perhaps we should keep a window of last n readings and when we are in range for more than n of the last n readings declare success thoughts | 0 |
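The windowed acceptance check proposed in the issue above can be sketched in plain Java. This is a minimal illustration of the idea (keep the last N readings, declare success once a strict majority is within [goal - error, goal + error]); the `WindowedCloseTo` class and `sample` method are hypothetical names for this sketch, not part of the jacdac test runner.

```java
import java.util.ArrayDeque;

/**
 * Sketch of the windowed closeTo check: instead of succeeding on the
 * first sample inside [goal - error, goal + error], keep the last N
 * readings and succeed once more than N/2 of them are in range.
 * Names are illustrative, not the actual jacdac API.
 */
class WindowedCloseTo {
    private final ArrayDeque<Boolean> window = new ArrayDeque<>();
    private final int n;
    private final double goal;
    private final double error;
    private int inRange = 0; // running count of in-range flags in the window

    WindowedCloseTo(int n, double goal, double error) {
        this.n = n;
        this.goal = goal;
        this.error = error;
    }

    /** Feed one reading of expr; returns true once the window holds N
     *  samples and a strict majority of them are within tolerance. */
    boolean sample(double value) {
        boolean ok = Math.abs(value - goal) <= error;
        window.addLast(ok);
        if (ok) inRange++;
        if (window.size() > n) {               // keep only the last N readings
            if (window.removeFirst()) inRange--;
        }
        return window.size() == n && inRange > n / 2;
    }
}
```

Requiring a full window before declaring success avoids passing on the first in-range sample; whether partial windows should also count is a separate design choice for the test runner.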
22,355 | 31,030,719,703 | IssuesEvent | 2023-08-10 12:18:10 | UnitTestBot/UTBotJava | https://api.github.com/repos/UnitTestBot/UTBotJava | opened | False positive test on timeout is generated by Symbolic engine | ctg-bug comp-instrumented-process comp-spring | **Description**
False positive test on timeout is generated by Symbolic engine on sm***t project: DCMI class, qcbu method
**To Reproduce**
1. Install [UnitTestBot plugin built from main ](https://github.com/UnitTestBot/UTBotJava/actions/runs/5811246767) in IntelliJ IDEA Ultimate 2023.2
2. Open sm***t project
3. Open DCMI class
4. Generate tests with Spring configuration `Sm***tApplication`, UnitTests, other settings as default
5. Find tests on timeout and run them
**Expected behavior**
No tests on timeout should be generated for the class.
**Actual behavior**
There are tests on timeout being generated.
They fail with NPE.
There are `java.lang.InterruptedException` and `java.lang.ThreadDeath` in utbot-engine-current.log (see below).
**Screenshots, logs**
~~~java
@Test
@Timeout(value = 1000L, unit = TimeUnit.MILLISECONDS)
public void test***_DCM***() throws ClassNotFoundException, IllegalAccessException, NoSuchFieldException, InvocationTargetException, NoSuchMethodException {
(when(dtcmMock.get***(any()))).thenReturn(((List) null));
setField(dscmi, "***", "dscm", dscmMock);
/* This execution may take longer than the 1000 ms timeout
and therefore fail due to exceeding the timeout. */
assertTimeoutPreemptively(Duration.ofMillis(1000L), () -> dscmi.get***(null));
}
~~~
~~~java
13:52:14.763 | ERROR | InstrumentedProcess | RdCategory: DynamicClassTransformer | Error while transforming: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1223)
at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
at org.utbot.common.StopWatch.stop(StopWatch.kt:66)
at org.utbot.instrumentation.agent.DynamicClassTransformer.transform(DynamicClassTransformer.kt:38)
at sun.instrument.TransformerManager.transform(TransformerManager.java:188)
at sun.instrument.InstrumentationImpl.transform(InstrumentationImpl.java:428)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:757)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:473)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at org.utbot.instrumentation.process.HandlerClassesLoader.loadClass(InstrumentedProcessMain.kt:54)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
at ch.qos.logback.classic.PatternLayout.<clinit>(PatternLayout.java:111)
at ch.qos.logback.classic.encoder.PatternLayoutEncoder.start(PatternLayoutEncoder.java:24)
at ch.qos.logback.core.joran.action.NestedComplexPropertyIA.end(NestedComplexPropertyIA.java:161)
at ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:309)
at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:193)
at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:179)
at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:62)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:165)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:152)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:110)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:53)
at ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:64)
at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:134)
at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:84)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)
at com.***
at sun.misc.Unsafe.allocateInstance(Native Method)
at org.utbot.framework.plugin.api.util.ReflectionUtilsKt.getAnyInstance(ReflectionUtils.kt:45)
at org.utbot.instrumentation.instrumentation.execution.constructors.InstrumentationContextAwareValueConstructor.constructObject(InstrumentationContextAwareValueConstructor.kt:142)
at org.utbot.instrumentation.instrumentation.execution.constructors.InstrumentationContextAwareValueConstructor.construct(InstrumentationContextAwareValueConstructor.kt:107)
at org.utbot.instrumentation.instrumentation.execution.constructors.InstrumentationContextAwareValueConstructor.constructMethodParameters(InstrumentationContextAwareValueConstructor.kt:88)
at org.utbot.instrumentation.instrumentation.execution.phases.ValueConstructionPhase.constructParameters(ValueConstructionPhase.kt:47)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$applyPreprocessing$constructedData$1.invoke(PhasesController.kt:96)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$applyPreprocessing$constructedData$1.invoke(PhasesController.kt:95)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1$result$1.invoke(PhasesController.kt:73)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:49)
at org.utbot.common.ThreadBasedExecutor$ensureThreadIsAlive$1.invoke(ThreadUtil.kt:97)
at org.utbot.common.ThreadBasedExecutor$ensureThreadIsAlive$1.invoke(ThreadUtil.kt:93)
at kotlin.concurrent.ThreadsKt$thread$thread$1.run(Thread.kt:30)
13:52:14.766 | INFO | InstrumentedProcess | RdCategory: DynamicClassTransformer | Transformed: ch/qos/logback/classic/pattern/ContextNameConverter
13:52:14.776 | ERROR | InstrumentedProcess | RdCategory: DynamicClassTransformer | Error while transforming: java.lang.ThreadDeath
at java.lang.Thread.stop(Thread.java:858)
at org.utbot.common.ThreadBasedExecutor.invokeWithTimeout-RgG5Fkc(ThreadUtil.kt:72)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1.invoke(PhasesController.kt:70)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1.invoke(PhasesController.kt:65)
at org.utbot.instrumentation.instrumentation.execution.phases.ExecutionPhaseKt.start(ExecutionPhase.kt:25)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController.executePhaseInTimeout(PhasesController.kt:65)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController.applyPreprocessing(PhasesController.kt:95)
at org.utbot.instrumentation.instrumentation.execution.SimpleUtExecutionInstrumentation$invoke$1$1.invoke(SimpleUtExecutionInstrumentation.kt:65)
at org.utbot.instrumentation.instrumentation.execution.SimpleUtExecutionInstrumentation$invoke$1$1.invoke(SimpleUtExecutionInstrumentation.kt:62)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1.invoke(UtExecutionInstrumentation.kt:46)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1.invoke(UtExecutionInstrumentation.kt:45)
at org.utbot.instrumentation.instrumentation.execution.SimpleUtExecutionInstrumentation.invoke(SimpleUtExecutionInstrumentation.kt:62)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$DefaultImpls.invoke(UtExecutionInstrumentation.kt:45)
at org.utbot.instrumentation.instrumentation.execution.SimpleUtExecutionInstrumentation.invoke(SimpleUtExecutionInstrumentation.kt:26)
at org.utbot.instrumentation.instrumentation.execution.SimpleUtExecutionInstrumentation.invoke(SimpleUtExecutionInstrumentation.kt:26)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:130)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:127)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:115)
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:88)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:114)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:362)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:148)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:54)
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase.queue$lambda-3(SingleThreadScheduler.kt:41)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
...
13:52:15.846 | ERROR | InstrumentedProcess | RdCategory: DynamicClassTransformer | Error while transforming: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1223)
at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
at org.utbot.common.StopWatch.stop(StopWatch.kt:66)
at org.utbot.instrumentation.agent.DynamicClassTransformer.transform(DynamicClassTransformer.kt:38)
at sun.instrument.TransformerManager.transform(TransformerManager.java:188)
at sun.instrument.InstrumentationImpl.transform(InstrumentationImpl.java:428)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:757)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:473)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1$result$1.invoke(PhasesController.kt:125)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:49)
at org.utbot.common.ThreadBasedExecutor$ensureThreadIsAlive$1.invoke(ThreadUtil.kt:97)
at org.utbot.common.ThreadBasedExecutor$ensureThreadIsAlive$1.invoke(ThreadUtil.kt:93)
at kotlin.concurrent.ThreadsKt$thread$thread$1.run(Thread.kt:30)
~~~
**Environment**
IntelliJ IDEA version - Ultimate 2023.2
Project - sm***t, Maven
JDK - 1.8
**Additional context**
Reproducing for customer.
Probably related issue:
- #2262
| 1.0 | False positive test on timeout is generated by Symbolic engine - **Description**
False positive test on timeout is generated by Symbolic engine on sm***t project: DCMI class, qcbu method
**To Reproduce**
1. Install [UnitTestBot plugin built from main ](https://github.com/UnitTestBot/UTBotJava/actions/runs/5811246767) in IntelliJ IDEA Ultimate 2023.2
2. Open sm***t project
3. Open DCMI class
4. Generate tests with Spring configuration `Sm***tApplication`, UnitTests, other settings as default
5. Find tests on timeout and run them
**Expected behavior**
No tests on timeout should be generated for the class.
**Actual behavior**
There are tests on timeout being generated.
They fail with NPE.
There are `java.lang.InterruptedException` and `java.lang.ThreadDeath` in utbot-engine-current.log (see below).
**Screenshots, logs**
~~~java
@Test
@Timeout(value = 1000L, unit = TimeUnit.MILLISECONDS)
public void test***_DCM***() throws ClassNotFoundException, IllegalAccessException, NoSuchFieldException, InvocationTargetException, NoSuchMethodException {
(when(dtcmMock.get***(any()))).thenReturn(((List) null));
setField(dscmi, "***", "dscm", dscmMock);
/* This execution may take longer than the 1000 ms timeout
and therefore fail due to exceeding the timeout. */
assertTimeoutPreemptively(Duration.ofMillis(1000L), () -> dscmi.get***(null));
}
~~~
~~~java
13:52:14.763 | ERROR | InstrumentedProcess | RdCategory: DynamicClassTransformer | Error while transforming: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1223)
at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
at org.utbot.common.StopWatch.stop(StopWatch.kt:66)
at org.utbot.instrumentation.agent.DynamicClassTransformer.transform(DynamicClassTransformer.kt:38)
at sun.instrument.TransformerManager.transform(TransformerManager.java:188)
at sun.instrument.InstrumentationImpl.transform(InstrumentationImpl.java:428)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:757)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:473)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at org.utbot.instrumentation.process.HandlerClassesLoader.loadClass(InstrumentedProcessMain.kt:54)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
at ch.qos.logback.classic.PatternLayout.<clinit>(PatternLayout.java:111)
at ch.qos.logback.classic.encoder.PatternLayoutEncoder.start(PatternLayoutEncoder.java:24)
at ch.qos.logback.core.joran.action.NestedComplexPropertyIA.end(NestedComplexPropertyIA.java:161)
at ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:309)
at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:193)
at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:179)
at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:62)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:165)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:152)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:110)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:53)
at ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:64)
at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:134)
at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:84)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)
at com.***
at sun.misc.Unsafe.allocateInstance(Native Method)
at org.utbot.framework.plugin.api.util.ReflectionUtilsKt.getAnyInstance(ReflectionUtils.kt:45)
at org.utbot.instrumentation.instrumentation.execution.constructors.InstrumentationContextAwareValueConstructor.constructObject(InstrumentationContextAwareValueConstructor.kt:142)
at org.utbot.instrumentation.instrumentation.execution.constructors.InstrumentationContextAwareValueConstructor.construct(InstrumentationContextAwareValueConstructor.kt:107)
at org.utbot.instrumentation.instrumentation.execution.constructors.InstrumentationContextAwareValueConstructor.constructMethodParameters(InstrumentationContextAwareValueConstructor.kt:88)
at org.utbot.instrumentation.instrumentation.execution.phases.ValueConstructionPhase.constructParameters(ValueConstructionPhase.kt:47)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$applyPreprocessing$constructedData$1.invoke(PhasesController.kt:96)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$applyPreprocessing$constructedData$1.invoke(PhasesController.kt:95)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1$result$1.invoke(PhasesController.kt:73)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:49)
at org.utbot.common.ThreadBasedExecutor$ensureThreadIsAlive$1.invoke(ThreadUtil.kt:97)
at org.utbot.common.ThreadBasedExecutor$ensureThreadIsAlive$1.invoke(ThreadUtil.kt:93)
at kotlin.concurrent.ThreadsKt$thread$thread$1.run(Thread.kt:30)
13:52:14.766 | INFO | InstrumentedProcess | RdCategory: DynamicClassTransformer | Transformed: ch/qos/logback/classic/pattern/ContextNameConverter
13:52:14.776 | ERROR | InstrumentedProcess | RdCategory: DynamicClassTransformer | Error while transforming: java.lang.ThreadDeath
at java.lang.Thread.stop(Thread.java:858)
at org.utbot.common.ThreadBasedExecutor.invokeWithTimeout-RgG5Fkc(ThreadUtil.kt:72)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1.invoke(PhasesController.kt:70)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1.invoke(PhasesController.kt:65)
at org.utbot.instrumentation.instrumentation.execution.phases.ExecutionPhaseKt.start(ExecutionPhase.kt:25)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController.executePhaseInTimeout(PhasesController.kt:65)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController.applyPreprocessing(PhasesController.kt:95)
at org.utbot.instrumentation.instrumentation.execution.SimpleUtExecutionInstrumentation$invoke$1$1.invoke(SimpleUtExecutionInstrumentation.kt:65)
at org.utbot.instrumentation.instrumentation.execution.SimpleUtExecutionInstrumentation$invoke$1$1.invoke(SimpleUtExecutionInstrumentation.kt:62)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1.invoke(UtExecutionInstrumentation.kt:46)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$invoke$1.invoke(UtExecutionInstrumentation.kt:45)
at org.utbot.instrumentation.instrumentation.execution.SimpleUtExecutionInstrumentation.invoke(SimpleUtExecutionInstrumentation.kt:62)
at org.utbot.instrumentation.instrumentation.execution.UtExecutionInstrumentation$DefaultImpls.invoke(UtExecutionInstrumentation.kt:45)
at org.utbot.instrumentation.instrumentation.execution.SimpleUtExecutionInstrumentation.invoke(SimpleUtExecutionInstrumentation.kt:26)
at org.utbot.instrumentation.instrumentation.execution.SimpleUtExecutionInstrumentation.invoke(SimpleUtExecutionInstrumentation.kt:26)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:130)
at org.utbot.instrumentation.process.InstrumentedProcessMainKt$setup$2.invoke(InstrumentedProcessMain.kt:127)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:115)
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:88)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:114)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:362)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:148)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:54)
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase.queue$lambda-3(SingleThreadScheduler.kt:41)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
...
13:52:15.846 | ERROR | InstrumentedProcess | RdCategory: DynamicClassTransformer | Error while transforming: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1223)
at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
at org.utbot.common.StopWatch.stop(StopWatch.kt:66)
at org.utbot.instrumentation.agent.DynamicClassTransformer.transform(DynamicClassTransformer.kt:38)
at sun.instrument.TransformerManager.transform(TransformerManager.java:188)
at sun.instrument.InstrumentationImpl.transform(InstrumentationImpl.java:428)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:757)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:473)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
at org.utbot.instrumentation.instrumentation.execution.phases.PhasesController$executePhaseInTimeout$1$result$1.invoke(PhasesController.kt:125)
at org.utbot.common.ThreadBasedExecutor$invokeWithTimeout$1.invoke(ThreadUtil.kt:49)
at org.utbot.common.ThreadBasedExecutor$ensureThreadIsAlive$1.invoke(ThreadUtil.kt:97)
at org.utbot.common.ThreadBasedExecutor$ensureThreadIsAlive$1.invoke(ThreadUtil.kt:93)
at kotlin.concurrent.ThreadsKt$thread$thread$1.run(Thread.kt:30)
~~~
**Environment**
IntelliJ IDEA version - Ultimate 2023.2
Project - sm***t, Maven
JDK - 1.8
**Additional context**
Reproducing for customer.
Probably related issue:
- #2262
| process | false positive test on timeout is generated by symbolic engine description false positive test on timeout is generated by symbolic engine on sm t project dcmi class qcbu method to reproduce install in intellij idea ultimate open sm t project open dcmi class generate tests with spring configuration sm tapplication unittests other settings as default find tests on timeout and run them expected behavior no tests on timeout should be generated for the class actual behavior there are tests on timeout being generated they fail with npe there are java lang interruptedexception and java lang threaddeath in utbot engine current log see below screenshots logs java test timeout value unit timeunit milliseconds public void test dcm throws classnotfoundexception illegalaccessexception nosuchfieldexception invocationtargetexception nosuchmethodexception when dtcmmock get any thenreturn list null setfield dscmi dscm dscmmock this execution may take longer than the ms timeout and therefore fail due to exceeding the timeout asserttimeoutpreemptively duration ofmillis dscmi get null java error instrumentedprocess rdcategory dynamicclasstransformer error while transforming java lang interruptedexception at java util concurrent locks abstractqueuedsynchronizer acquireinterruptibly abstractqueuedsynchronizer java at java util concurrent locks reentrantlock lockinterruptibly reentrantlock java at org utbot common stopwatch stop stopwatch kt at org utbot instrumentation agent dynamicclasstransformer transform dynamicclasstransformer kt at sun instrument transformermanager transform transformermanager java at sun instrument instrumentationimpl transform instrumentationimpl java at java lang classloader native method at java lang classloader defineclass classloader java at java security secureclassloader defineclass secureclassloader java at java net urlclassloader defineclass urlclassloader java at java net urlclassloader access urlclassloader java at java net urlclassloader 
run urlclassloader java at java net urlclassloader run urlclassloader java at java security accesscontroller doprivileged native method at java net urlclassloader findclass urlclassloader java at java lang classloader loadclass classloader java at org utbot instrumentation process handlerclassesloader loadclass instrumentedprocessmain kt at java lang classloader loadclass classloader java at ch qos logback classic patternlayout patternlayout java at ch qos logback classic encoder patternlayoutencoder start patternlayoutencoder java at ch qos logback core joran action nestedcomplexpropertyia end nestedcomplexpropertyia java at ch qos logback core joran spi interpreter callendaction interpreter java at ch qos logback core joran spi interpreter endelement interpreter java at ch qos logback core joran spi interpreter endelement interpreter java at ch qos logback core joran spi eventplayer play eventplayer java at ch qos logback core joran genericconfigurator doconfigure genericconfigurator java at ch qos logback core joran genericconfigurator doconfigure genericconfigurator java at ch qos logback core joran genericconfigurator doconfigure genericconfigurator java at ch qos logback core joran genericconfigurator doconfigure genericconfigurator java at ch qos logback classic util contextinitializer configurebyresource contextinitializer java at ch qos logback classic util contextinitializer autoconfig contextinitializer java at org impl staticloggerbinder init staticloggerbinder java at org impl staticloggerbinder staticloggerbinder java at org loggerfactory bind loggerfactory java at org loggerfactory performinitialization loggerfactory java at org loggerfactory getiloggerfactory loggerfactory java at org loggerfactory getlogger loggerfactory java at org loggerfactory getlogger loggerfactory java at com at sun misc unsafe allocateinstance native method at org utbot framework plugin api util reflectionutilskt getanyinstance reflectionutils kt at org utbot instrumentation 
instrumentation execution constructors instrumentationcontextawarevalueconstructor constructobject instrumentationcontextawarevalueconstructor kt at org utbot instrumentation instrumentation execution constructors instrumentationcontextawarevalueconstructor construct instrumentationcontextawarevalueconstructor kt at org utbot instrumentation instrumentation execution constructors instrumentationcontextawarevalueconstructor constructmethodparameters instrumentationcontextawarevalueconstructor kt at org utbot instrumentation instrumentation execution phases valueconstructionphase constructparameters valueconstructionphase kt at org utbot instrumentation instrumentation execution phases phasescontroller applypreprocessing constructeddata invoke phasescontroller kt at org utbot instrumentation instrumentation execution phases phasescontroller applypreprocessing constructeddata invoke phasescontroller kt at org utbot instrumentation instrumentation execution phases phasescontroller executephaseintimeout result invoke phasescontroller kt at org utbot common threadbasedexecutor invokewithtimeout invoke threadutil kt at org utbot common threadbasedexecutor ensurethreadisalive invoke threadutil kt at org utbot common threadbasedexecutor ensurethreadisalive invoke threadutil kt at kotlin concurrent threadskt thread thread run thread kt info instrumentedprocess rdcategory dynamicclasstransformer transformed ch qos logback classic pattern contextnameconverter error instrumentedprocess rdcategory dynamicclasstransformer error while transforming java lang threaddeath at java lang thread stop thread java at org utbot common threadbasedexecutor invokewithtimeout threadutil kt at org utbot instrumentation instrumentation execution phases phasescontroller executephaseintimeout invoke phasescontroller kt at org utbot instrumentation instrumentation execution phases phasescontroller executephaseintimeout invoke phasescontroller kt at org utbot instrumentation instrumentation execution 
phases executionphasekt start executionphase kt at org utbot instrumentation instrumentation execution phases phasescontroller executephaseintimeout phasescontroller kt at org utbot instrumentation instrumentation execution phases phasescontroller applypreprocessing phasescontroller kt at org utbot instrumentation instrumentation execution simpleutexecutioninstrumentation invoke invoke simpleutexecutioninstrumentation kt at org utbot instrumentation instrumentation execution simpleutexecutioninstrumentation invoke invoke simpleutexecutioninstrumentation kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation invoke invoke utexecutioninstrumentation kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation invoke invoke utexecutioninstrumentation kt at org utbot instrumentation instrumentation execution simpleutexecutioninstrumentation invoke simpleutexecutioninstrumentation kt at org utbot instrumentation instrumentation execution utexecutioninstrumentation defaultimpls invoke utexecutioninstrumentation kt at org utbot instrumentation instrumentation execution simpleutexecutioninstrumentation invoke simpleutexecutioninstrumentation kt at org utbot instrumentation instrumentation execution simpleutexecutioninstrumentation invoke simpleutexecutioninstrumentation kt at org utbot instrumentation process instrumentedprocessmainkt setup invoke instrumentedprocessmain kt at org utbot instrumentation process instrumentedprocessmainkt setup invoke instrumentedprocessmain kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at org utbot rd idlewatchdog wrapactive clientprocessutil kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework impl rdcall onwirereceived rdtask kt at com jetbrains rd 
framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl protocolcontexts readmessagecontextandinvoke protocolcontexts kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd util threading singlethreadschedulerbase queue lambda singlethreadscheduler kt at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java error instrumentedprocess rdcategory dynamicclasstransformer error while transforming java lang interruptedexception at java util concurrent locks abstractqueuedsynchronizer acquireinterruptibly abstractqueuedsynchronizer java at java util concurrent locks reentrantlock lockinterruptibly reentrantlock java at org utbot common stopwatch stop stopwatch kt at org utbot instrumentation agent dynamicclasstransformer transform dynamicclasstransformer kt at sun instrument transformermanager transform transformermanager java at sun instrument instrumentationimpl transform instrumentationimpl java at java lang classloader native method at java lang classloader defineclass classloader java at java security secureclassloader defineclass secureclassloader java at java net urlclassloader defineclass urlclassloader java at java net urlclassloader access urlclassloader java at java net urlclassloader run urlclassloader java at java net urlclassloader run urlclassloader java at java security accesscontroller doprivileged native method at java net urlclassloader findclass urlclassloader java at java lang classloader loadclass classloader java at sun misc launcher appclassloader loadclass launcher java at java lang classloader loadclass classloader java at org utbot instrumentation instrumentation execution phases 
phasescontroller executephaseintimeout result invoke phasescontroller kt at org utbot common threadbasedexecutor invokewithtimeout invoke threadutil kt at org utbot common threadbasedexecutor ensurethreadisalive invoke threadutil kt at org utbot common threadbasedexecutor ensurethreadisalive invoke threadutil kt at kotlin concurrent threadskt thread thread run thread kt environment intellij idea version ultimate project sm t maven jdk additional context reproducing for customer probably related issue | 1 |
22,737 | 32,056,186,212 | IssuesEvent | 2023-09-24 05:18:35 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | metricstransformprocessor: Snap timestamps to the previous second/minute | Stale processor/metricstransform closed as inactive | **Is your feature request related to a problem? Please describe.**
I am trying to group by metrics coming from different workers to the collector.
```
- include: app.counter.job.input-row
action: update
operations:
- action: aggregate_labels
aggregation_type: sum
label_set: []
```
But for this to be effective timestamps have to match as well.
**Describe the solution you'd like**
Ability to snap timestamp to nearest second/minute based on config.
```
- include: app.counter.job.input-row
action: update
operations:
- action: update_timestamp
round_to_nearest: 1m
- action: aggregate_labels
aggregation_type: sum
label_set: []
```
**Describe alternatives you've considered**
None
**Additional context**
This will help reduce the outgoing metric volume, considering we don't require sub-second visibility into the metric.
| 1.0 | metricstransformprocessor: Snap timestamps to the previous second/minute - **Is your feature request related to a problem? Please describe.**
I am trying to group by metrics coming from different workers to the collector.
```
- include: app.counter.job.input-row
action: update
operations:
- action: aggregate_labels
aggregation_type: sum
label_set: []
```
But for this to be effective timestamps have to match as well.
**Describe the solution you'd like**
Ability to snap timestamp to nearest second/minute based on config.
```
- include: app.counter.job.input-row
action: update
operations:
- action: update_timestamp
round_to_nearest: 1m
- action: aggregate_labels
aggregation_type: sum
label_set: []
```
**Describe alternatives you've considered**
None
**Additional context**
This will help reduce the outgoing metric volume, considering we don't require sub-second visibility into the metric.
| process | metricstransformprocessor snap timestamps to the previous second minute is your feature request related to a problem please describe i am trying to group by metrics coming from different workers to the collector include app counter job input row action update operations action aggregate labels aggregation type sum label set but for this to be effective timestamps have to match as well describe the solution you d like ability to snap timestamp to nearest second minute based on config include app counter job input row action update operations action update timestamp round to nearest action aggregate labels aggregation type sum label set describe alternatives you ve considered none additional context this will help reduce the outgoing metric volume considering we don t require sub second visibility into the metric | 1 |
13,288 | 15,765,779,537 | IssuesEvent | 2021-03-31 14:27:34 | brucemiller/LaTeXML | https://api.github.com/repos/brucemiller/LaTeXML | closed | `href` attribute in jats output | bug postprocessing | Considering
**href.tex**:
```latex
\documentclass{article}
\usepackage{hyperref}
\begin{document}
\href{https://www.example.com}{Example link}
\end{document}
```
From `latexml href.tex --dest=href.xml`
i get
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?latexml searchpaths="/home/robert/Work/ems/tex-json/magazine/2021-03"?>
<?latexml class="article"?>
<?latexml package="hyperref"?>
<?latexml RelaxNGSchema="LaTeXML"?>
<document xmlns="http://dlmf.nist.gov/LaTeXML">
<resource src="LaTeXML.css" type="text/css"/>
<resource src="ltx-article.css" type="text/css"/>
<para xml:id="p1">
<p><ref class="ltx_href" href="https://www.example.com">Example link</ref></p>
</para>
</document>
```
Converting to Jats with ` latexmlc href.tex --dest=href.xml --stylesheet=LaTeXML-jats.xsl` i get a link with an empty href
```xml
<?xml version="1.0"?>
<article>
<front>
<article-meta>
<contrib-group/>
</article-meta>
</front>
<body>
<p id="p1">
<ext-link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="">Example link</ext-link>
</p>
</body>
<back>
<app-group/>
</back>
</article>
```
In a local `LaTeXML-jats.xsl` i changed the refs to:
```diff
@@ -853,37 +853,37 @@
</xsl:template>
<xsl:template match="ltx:ref[@class='ltx_url']">
- <ext-link xlink:href="{./href}">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()"/>
</ext-link>
</xsl:template>
<xsl:template match="ltx:ref[@class='ltx_url']" mode="front">
- <ext-link xlink:href="{./href}">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()" mode="front"/>
</ext-link>
</xsl:template>
<xsl:template match="ltx:ref[@class='ltx_url']" mode="back">
- <ext-link xlink:href="{./href}">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()" mode="back"/>
</ext-link>
</xsl:template>
- <xsl:template match="ltx:ref[not(./@idref or ./@labelref) and ./@href]">
- <ext-link xlink:href="{./href}">
+ <xsl:template match="ltx:ref[./@href]">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()"/>
</ext-link>
</xsl:template>
<xsl:template match="ltx:ref[not(./@idref or ./@labelref) and ./@href]" mode="front">
- <ext-link xlink:href="{./href}">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()"/>
</ext-link>
</xsl:template>
<xsl:template match="ltx:ref[not(./@idref or ./@labelref) and ./@href]" mode="back">
- <ext-link xlink:href="{./href}">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()"/>
</ext-link>
</xsl:template>
```
If that's a proper fix i'm happy to contribute a pull request.
I'm using LaTeXML version 0.8.4 but used a xsl file downloaded from this repo's main development branch.
| 1.0 | `href` attribute in jats output - Considering
**href.tex**:
```latex
\documentclass{article}
\usepackage{hyperref}
\begin{document}
\href{https://www.example.com}{Example link}
\end{document}
```
From `latexml href.tex --dest=href.xml`
i get
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?latexml searchpaths="/home/robert/Work/ems/tex-json/magazine/2021-03"?>
<?latexml class="article"?>
<?latexml package="hyperref"?>
<?latexml RelaxNGSchema="LaTeXML"?>
<document xmlns="http://dlmf.nist.gov/LaTeXML">
<resource src="LaTeXML.css" type="text/css"/>
<resource src="ltx-article.css" type="text/css"/>
<para xml:id="p1">
<p><ref class="ltx_href" href="https://www.example.com">Example link</ref></p>
</para>
</document>
```
Converting to Jats with ` latexmlc href.tex --dest=href.xml --stylesheet=LaTeXML-jats.xsl` i get a link with an empty href
```xml
<?xml version="1.0"?>
<article>
<front>
<article-meta>
<contrib-group/>
</article-meta>
</front>
<body>
<p id="p1">
<ext-link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="">Example link</ext-link>
</p>
</body>
<back>
<app-group/>
</back>
</article>
```
In a local `LaTeXML-jats.xsl` i changed the refs to:
```diff
@@ -853,37 +853,37 @@
</xsl:template>
<xsl:template match="ltx:ref[@class='ltx_url']">
- <ext-link xlink:href="{./href}">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()"/>
</ext-link>
</xsl:template>
<xsl:template match="ltx:ref[@class='ltx_url']" mode="front">
- <ext-link xlink:href="{./href}">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()" mode="front"/>
</ext-link>
</xsl:template>
<xsl:template match="ltx:ref[@class='ltx_url']" mode="back">
- <ext-link xlink:href="{./href}">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()" mode="back"/>
</ext-link>
</xsl:template>
- <xsl:template match="ltx:ref[not(./@idref or ./@labelref) and ./@href]">
- <ext-link xlink:href="{./href}">
+ <xsl:template match="ltx:ref[./@href]">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()"/>
</ext-link>
</xsl:template>
<xsl:template match="ltx:ref[not(./@idref or ./@labelref) and ./@href]" mode="front">
- <ext-link xlink:href="{./href}">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()"/>
</ext-link>
</xsl:template>
<xsl:template match="ltx:ref[not(./@idref or ./@labelref) and ./@href]" mode="back">
- <ext-link xlink:href="{./href}">
+ <ext-link xlink:href="{./@href}">
<xsl:apply-templates select="@*|node()"/>
</ext-link>
</xsl:template>
```
If that's a proper fix i'm happy to contribute a pull request.
I'm using LaTeXML version 0.8.4 but used a xsl file downloaded from this repo's main development branch.
| process | href attribute in jats output considering href tex latex documentclass article usepackage hyperref begin document href link end document from latexml href tex dest href xml i get xml document xmlns converting to jats with latexmlc href tex dest href xml stylesheet latexml jats xsl i get a link with an empty href xml example link in a local latexml jats xsl i changed the refs to diff if that s a proper fix i m happy to contribute a pull request i m using latexml version but used a xsl file downloaded from this repo s main development branch | 1 |
22,265 | 30,817,617,548 | IssuesEvent | 2023-08-01 14:22:43 | lambdaclass/cairo_native | https://api.github.com/repos/lambdaclass/cairo_native | closed | Property test support | process | Include [`proptest`](https://github.com/proptest-rs/proptest) as a dependency, examine the codebase for possible areas to property test and create separate issues for them. | 1.0 | Property test support - Include [`proptest`](https://github.com/proptest-rs/proptest) as a dependency, examine the codebase for possible areas to property test and create separate issues for them. | process | property test support include as a dependency examine the codebase for possible areas to property test and create separate issues for them | 1 |
10,802 | 13,609,287,848 | IssuesEvent | 2020-09-23 04:50:09 | googleapis/java-os-login | https://api.github.com/repos/googleapis/java-os-login | closed | Dependency Dashboard | api: oslogin type: process | This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 1.0 | Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| process | dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any build deps update dependency org apache maven plugins maven project info reports plugin to check this box to trigger a request for renovate to run again on this repository | 1 |
6,531 | 9,629,915,881 | IssuesEvent | 2019-05-15 10:43:47 | ropensci/software-review-meta | https://api.github.com/repos/ropensci/software-review-meta | opened | task views | process | The dev guide states that [packages in our suite are listed in our task views](https://ropensci.github.io/dev_guide/softwarereviewintro.html#whysubmit).
Are the task views still maintained? For some of them, it does seem so. https://github.com/search?utf8=%E2%9C%93&q=user%3Aropensci+%22task+view%22&type=Repositories&ref=searchresults
What should happen?
- Removing that mention from the dev guide
- Recruiting new maintainers for the task views, and adding the task view mention to the approval comment template (at the moment it is only in https://devdevguide.netlify.com/editorguide.html#package-promotion)? | 1.0 | task views - The dev guide states that [packages in our suite are listed in our task views](https://ropensci.github.io/dev_guide/softwarereviewintro.html#whysubmit).
Are the task views still maintained? For some of them, it does seem so. https://github.com/search?utf8=%E2%9C%93&q=user%3Aropensci+%22task+view%22&type=Repositories&ref=searchresults
What should happen?
- Removing that mention from the dev guide
- Recruiting new maintainers for the task views, and adding the task view mention to the approval comment template (at the moment it is only in https://devdevguide.netlify.com/editorguide.html#package-promotion)? | process | task views the dev guide states that are the task views still maintained for some of them it does seem so what should happen removing that mention from the dev guide recruiting new maintainers for the task views and adding the task view mention to the approval comment template at the moment it is only in | 1 |
21,493 | 29,659,023,651 | IssuesEvent | 2023-06-10 00:35:44 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [Remoto] Senior Technology Manager na Coodesh | SALVADOR INFRAESTRUTURA REDES PYTHON SENIOR VISÃO COMPUTACIONAL STARTUP REACT VUE REQUISITOS REMOTO PROCESSOS GITHUB CI UMA QUALIDADE LIDERANÇA SAAS MACHINE LEARNING INTELIGÊNCIA ARTIFICIAL Stale | ## Descrição da vaga:
This is an opening from a partner of the Coodesh platform; by applying you will get access to the complete information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/jobs/gestor-de-tecnologia-senior-140905034?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p>PixForce is looking for a Senior Technology Manager to join its team!</p>
<p>Pix Force develops solutions using computer vision and artificial intelligence technologies. #1 in the 100 Open Startups ranking in 2020. We are expanding rapidly, with plans to go international in 2021. Shall we go together?</p>
<p></p>
<p>Responsibilities:</p>
<ul>
<li>Develop the technical aspects of the company's strategy to ensure alignment with its business goals;</li>
<li>Design and customize technology systems and platforms to improve the customer experience and Pix Force's profitability;</li>
<li>Plan the implementation of new systems and provide guidance to IT professionals and other employees of the organization;</li>
<li>Oversee the technology infrastructure (networks and computer systems) in the organization to ensure optimal performance;</li>
<li>Direct and organize IT-related projects;</li>
<li>Monitor changes or advances in technology to discover ways that allow the company to gain a competitive advantage;</li>
<li>Discover and implement new technologies that provide a competitive advantage;</li>
<li>Help departments make cost-effective use of technologies;</li>
<li>Oversee the systems infrastructure to ensure functionality and efficiency;</li>
<li>Create quality assurance and data protection processes;</li>
<li>Monitor key performance indicators (KPIs) and IT budgets to assess technology performance;</li>
<li>Use stakeholder feedback to inform necessary improvements and adjustments to the technologies;</li>
<li>Communicate the technology strategy to partners and investors.</li>
</ul>
## Pix Force:
<p>Pix Force develops solutions using computer vision, artificial intelligence and machine learning technologies. We provide our clients with valuable information through automatic acquisition and interpretation of images and videos.</p>
<p>#1 computer vision startup in Brazil, winner of multiple awards.</p>
## Skills:
- Python
- Vue.js
- React.js
## Location:
100% Remote
## Requirements:
- Solid experience;
- Proven experience as a technology manager or in a similar leadership role;
- Experience developing SaaS platforms;
- Knowledge of technology trends to develop strategies;
- Ability to conduct technology analyses and research;
- Communication skills;
- Organizational and leadership skills;
- Strategic thinking;
- Problem-solving aptitude.
## Nice to have:
- Bachelor's degree in Computer Science, Engineering or a related field; a doctorate or master's degree is a plus.
## Benefits:
- Flexible hours;
- Meal allowance;
- Stock option plan.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Senior Technology Manager na Pix Force](https://coodesh.com/jobs/gestor-de-tecnologia-senior-140905034?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow and receive all interactions of the process there. Use the **Request Feedback** option between one stage and the next of the opening you applied to. This will make the **Recruiter** responsible for the process at the company receive the notification.
## Labels
#### Allocation
Remote
#### Employment type
CLT
#### Category
IT Management | 1.0 | process | 1
49,210 | 10,330,120,324 | IssuesEvent | 2019-09-02 13:54:58 | cyring/CoreFreq | https://api.github.com/repos/cyring/CoreFreq | closed | Confidential data | code review | CoreFreq discloses confidential data:
* SMBIOS data, such as the board serial number, some vendor service identifiers
* A CPUID registers dump may also reveal confidential bits
# Pending Enhancements
1. Daemon: replace the above data with stars.
2. Client: new setting to request a global unhide/hide action.
Cli asks Daemon to unmask/mask stars
3. Client: command line option for any sensitive dumps. Impacts: send request during the SHM opening.
4. Daemon: SysAdmin reserved: command line argument to grant/deny any unhide request.
| 1.0 | non_process | 0
10,969 | 13,772,496,057 | IssuesEvent | 2020-10-08 00:42:55 | mpi-forum/mpi-issues | https://api.github.com/repos/mpi-forum/mpi-issues | closed | Update: Section 10.1 - p. 399 - Update PVM Reference | Chapter Committee Change chap-process editor pass mpi-4.0 | ## Problem
10.1 Is the extensive reference to PVM still relevant?
### Suggested Fix
Chapter committee to update | 1.0 | process | 1
124,971 | 26,569,789,908 | IssuesEvent | 2023-01-21 01:59:21 | iree-org/iree | https://api.github.com/repos/iree-org/iree | closed | Renable failing CUDA transform dialect tests | codegen/nvvm | The LLVM integration PR #11891 shows failure in CUDA reduction tests for transform dialect. These are disabled to land the integrate, and need to be fixed forward. | 1.0 | non_process | 0
789,728 | 27,804,164,917 | IssuesEvent | 2023-03-17 18:15:11 | dpkg-i-foo-deb/libre-asi | https://api.github.com/repos/dpkg-i-foo-deb/libre-asi | opened | Generate Addiction Severity Indexes | User story Priority | As an interviewer, I wish the application could generate and allow visualising the addiction severity indexes after an interview has finished | 1.0 | non_process | 0
5,744 | 5,920,858,834 | IssuesEvent | 2017-05-22 21:20:21 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | Test failure: CoreMangLib_components._stopwatch_Co9604get_IsRunning_Co9604get_IsRunning_/_stopwatch_Co9604get_IsRunning_Co9604get_IsRunning_cmd | area-Infrastructure area-ReadyToRun | Opened on behalf of @gkhanna79
The test `CoreMangLib_components._stopwatch_Co9604get_IsRunning_Co9604get_IsRunning_/_stopwatch_Co9604get_IsRunning_Co9604get_IsRunning_cmd` has failed.
The system cannot find the file specified.\r
Error compiling Co9604get_IsRunning.org: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)\r
Error: file "Co9604get_IsRunning.org" or one of its dependencies was not found\r
Return code: 1
Raw output file: C:\dotnetbuild\work\7adf97fe-28fe-4aaa-9a36-e5ea85a9e45f\Work\3f729e63-fd57-43b8-b46e-4cea58854a10\Unzip\Reports\CoreMangLib.components\stopwatch\Co9604get_IsRunning\Co9604get_IsRunning.output.txt
Raw output:
BEGIN EXECUTION\r
The system cannot find the file specified.\r
" C:\dotnetbuild\work\7adf97fe-28fe-4aaa-9a36-e5ea85a9e45f\Payload\crossgen.exe" /Platform_Assemblies_Paths C:\dotnetbuild\work\7adf97fe-28fe-4aaa-9a36-e5ea85a9e45f\Payload;C:\dotnetbuild\work\7adf97fe-28fe-4aaa-9a36-e5ea85a9e45f\Work\3f729e63-fd57-43b8-b46e-4cea58854a10\Unzip\stopwatch\Co9604get_IsRunning\IL;C:\dotnetbuild\work\7adf97fe-28fe-4aaa-9a36-e5ea85a9e45f\Work\3f729e63-fd57-43b8-b46e-4cea58854a10\Unzip\stopwatch\Co9604get_IsRunning /in Co9604get_IsRunning.org /out Co9604get_IsRunning.exe\r
Microsoft (R) CoreCLR Native Image Generator - Version 4.5.22220.0\r
Copyright (c) Microsoft Corporation. All rights reserved.\r
\r
Warning: Error enumerating files under C:\dotnetbuild\work\7adf97fe-28fe-4aaa-9a36-e5ea85a9e45f\Work\3f729e63-fd57-43b8-b46e-4cea58854a10\Unzip\stopwatch\Co9604get_IsRunning\IL\.\r
Crossgen failed with exitcode - -3\r
Test Harness Exitcode is : 1\r
To run the test:
> set CORE_ROOT=C:\dotnetbuild\work\7adf97fe-28fe-4aaa-9a36-e5ea85a9e45f\Payload
> C:\dotnetbuild\work\7adf97fe-28fe-4aaa-9a36-e5ea85a9e45f\Work\3f729e63-fd57-43b8-b46e-4cea58854a10\Unzip\stopwatch\Co9604get_IsRunning\Co9604get_IsRunning.cmd
\r
Expected: True\r
Actual: False
Stack Trace:
Build : 2.0.0 - 20170522.01 (Ready-To-Run Tests)
Failing configurations:
- windows.10.amd64
- x64
| 1.0 | non_process | 0
3,186 | 6,259,052,016 | IssuesEvent | 2017-07-14 17:05:57 | PeaceGeeksSociety/salesforce | https://api.github.com/repos/PeaceGeeksSociety/salesforce | opened | Identify where applicants dropped off in the application process | Recruitment Processes | We would like to be able to identify the last stage of the application process where the applicant qualified. We can track it as a past activity or have a field that shows the last status before they are unqualified.
This is because it would allow us to identify worthwhile candidates if new vacancies open up.
Done when: process defined for tracking where applicants drop off in application process, inputted for previous applicants. | 1.0 | process | 1
77,588 | 14,886,395,409 | IssuesEvent | 2021-01-20 16:53:35 | dermestid/bold-phylodiv-scripts | https://api.github.com/repos/dermestid/bold-phylodiv-scripts | closed | This code is barbaric | code structure performance | #40 left things in a generally chaotic state:
- file names no longer reflect their contents
- variable name inconsistency getting out of hand
- continued use of magic values
- abuse of const array keys in place of type inference or static inheritance
- many possibly unneeded looping constructs
Fix fix fix once done with testing of output | 1.0 | non_process | 0
89,187 | 8,196,662,931 | IssuesEvent | 2018-08-31 10:36:27 | humera987/HumTestData | https://api.github.com/repos/humera987/HumTestData | closed | humerafxtesting : api_v1_users_org-add_post_auth_invalid | humerafxtesting | Project : humerafxtesting
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 400
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 30 Aug 2018 10:36:36 GMT]}
Endpoint : http://13.56.210.25/api/v1/users/org-add
Request :
Response :
null
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [400]
--- FX Bot --- | 1.0 | non_process | 0
24,475 | 12,112,492,118 | IssuesEvent | 2020-04-21 13:53:17 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | Cannot use aws_glue_security_configuration with SSE-S3 encryption because of spurious empty KmsKeyArn | bug service/glue | ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Terraform Version
Terraform v0.12.24
AWS provider v2.57
### Affected Resource(s)
* aws_glue_security_configuration
### Terraform Configuration Files
```hcl
resource "aws_glue_security_configuration" "sec_conf" {
name = "sec_conf"
encryption_configuration {
cloudwatch_encryption {
cloudwatch_encryption_mode = "DISABLED"
}
job_bookmarks_encryption {
job_bookmarks_encryption_mode = "DISABLED"
}
s3_encryption {
s3_encryption_mode = "SSE-S3"
}
}
}
```
### Expected Behavior
When I read the resource back with boto3 client, I expect to see the following:
```python
import boto3
client = boto3.client("glue")
client.get_security_configuration(Name="sec_conf")
```
```json
{
"Name": "sec_conf",
"EncryptionConfiguration": {
"S3Encryption": [
{
"S3EncryptionMode": "SSE-S3"
}
],
"CloudWatchEncryption": {
"CloudWatchEncryptionMode": "DISABLED"
},
"JobBookmarksEncryption": {
"JobBookmarksEncryptionMode": "DISABLED"
}
}
}
```
and to be able to use the resource.
### Actual Behavior
It is created with empty strings for "KmsKeyArn"
```json
{
"Name": "sec_conf",
"EncryptionConfiguration": {
"S3Encryption": [
{
"S3EncryptionMode": "SSE-S3",
"KmsKeyArn": ""
}
],
"CloudWatchEncryption": {
"CloudWatchEncryptionMode": "DISABLED",
"KmsKeyArn": ""
},
"JobBookmarksEncryption": {
"JobBookmarksEncryptionMode": "DISABLED",
"KmsKeyArn": ""
}
}
}
```
Trying to use the configuration in a glue job, I get the following error message:
```
An error occurred while calling z:com.amazonaws.services.glue.util.Job.commit. 1 validation error detected: Value '' at 'keyId' failed to satisfy constraint: Member must have length greater than or equal to 1 (Service: AWSKMS; Status Code: 400; Error Code: ValidationException; Request ID: ...)
```
### Workaround attempts
1. I tried setting the `kms_key_arn = null` in the terraform, but the empty string is still there.
2. I also tried to specify the key arn, as follows:
```hcl
data "aws_kms_alias" "s3" {
name = "alias/aws/s3"
}
resource "aws_glue_security_configuration" "sec_conf" {
name = "sec_conf"
encryption_configuration {
cloudwatch_encryption {
cloudwatch_encryption_mode = "DISABLED"
}
job_bookmarks_encryption {
job_bookmarks_encryption_mode = "DISABLED"
}
s3_encryption {
s3_encryption_mode = "SSE-S3"
kms_key_arn = data.aws_kms_alias.s3.target_key_arn
}
}
}
```
In that case, `terraform apply` gives me the following error message:
```
Error: error creating Glue Security Configuration (glue_security): InvalidInputException: kmsKeyArn can only be empty for s3EncryptionMode that is not SSE_KMS
on main.tf line 6, in resource "aws_glue_security_configuration" "sec_conf":
6: resource "aws_glue_security_configuration" "sec_conf" {
```
### Steps to Reproduce
1. `terraform apply`
2. Use the configuration in a job
### References
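The empty-string `KmsKeyArn` state quoted under "Actual Behavior" can be detected mechanically. The helper below is an illustrative sketch (the function name and looping logic are mine; only the response shape comes from the `get_security_configuration` output quoted in this report):

```python
def find_empty_kms_key_arns(security_configuration):
    """Return the encryption sections whose KmsKeyArn is an empty string."""
    enc = security_configuration["EncryptionConfiguration"]
    bad = []
    # S3Encryption is a list of entries in the response shown above.
    for entry in enc.get("S3Encryption", []):
        if entry.get("KmsKeyArn") == "":
            bad.append("S3Encryption")
    # The other two sections are single objects.
    for section in ("CloudWatchEncryption", "JobBookmarksEncryption"):
        if enc.get(section, {}).get("KmsKeyArn") == "":
            bad.append(section)
    return bad

# The faulty response from "Actual Behavior":
faulty = {
    "Name": "sec_conf",
    "EncryptionConfiguration": {
        "S3Encryption": [{"S3EncryptionMode": "SSE-S3", "KmsKeyArn": ""}],
        "CloudWatchEncryption": {"CloudWatchEncryptionMode": "DISABLED", "KmsKeyArn": ""},
        "JobBookmarksEncryption": {"JobBookmarksEncryptionMode": "DISABLED", "KmsKeyArn": ""},
    },
}
print(find_empty_kms_key_arns(faulty))
# -> ['S3Encryption', 'CloudWatchEncryption', 'JobBookmarksEncryption']
```

Against the expected (clean) response, where no `KmsKeyArn` keys are present, the helper returns an empty list.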
| 1.0 | non_process | 0
5,651 | 8,515,034,273 | IssuesEvent | 2018-10-31 20:20:26 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | Dissect: * and & reference modifier, breaking change for 7.0 | :Processors libbeat | In the current dissect implementation you can use a named skip field as a reference key. We want to change the behavior to use the `*` modifier instead of a named skip field: this is more explicit and closer to pointers and references in programming languages. You will still be able to have a named skip field, but its value will not be accessible.
Extract from the [specification](https://github.com/elastic/dissect-specification)
`?` - Named skip key instructs the parser not to include this result in the final result set. Behaves identically to an empty skip key `%{}`, but may be used to help with human readability. The `?` modifier must be placed to the left of the key name. See example below.
`*` and `&` reference modifiers. This modifier requires two keys with the same name present in the dissect pattern. One key with the * and another with the &. This instructs the parser that the value discovered by the * is to be used as the key name for the value discovered by the corresponding & key. These modifiers must be placed on the left of the key name. see example below
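Since the examples referenced above are not included here, the following is a minimal Python sketch of how the `?`, `*`, and `&` modifiers would resolve (illustration only — Beats' actual dissect implementation is in Go, and this toy parser is an assumption, not its code):

```python
import re

def dissect(pattern, text):
    """Toy dissect parser for %{key} fields with the ?, * and &
    modifiers described above (illustration only)."""
    # Split "%{a} - %{b}" into literals ["", " - ", ""] and keys ["a", "b"].
    tokens = re.split(r"%\{([^}]*)\}", pattern)
    literals, keys = tokens[0::2], tokens[1::2]
    regex = re.escape(literals[0])
    for lit in literals[1:]:
        regex += "(.+?)" + re.escape(lit)
    m = re.fullmatch(regex, text)
    if m is None:
        return None
    raw = dict(zip(keys, m.groups()))
    result, names, values = {}, {}, {}
    for key, val in raw.items():
        if key == "" or key.startswith("?"):
            continue                 # %{} and %{?name}: skip, value dropped
        elif key.startswith("*"):
            names[key[1:]] = val     # %{*name}: matched value becomes a field NAME
        elif key.startswith("&"):
            values[key[1:]] = val    # %{&name}: matched value paired with that name
        else:
            result[key] = val
    for ref, field_name in names.items():
        if ref in values:
            result[field_name] = values[ref]
    return result

# %{*method}'s match ("GET") names the field holding %{&method}'s match:
print(dissect("%{*method} %{&method}", "GET /index.html"))  # {'GET': '/index.html'}
```

With `%{?junk}` in place of `%{*method}`, the first token would simply be discarded instead of becoming a key name.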
161,770 | 12,564,584,057 | IssuesEvent | 2020-06-08 08:20:44 | hazelcast/hazelcast-jet | https://api.github.com/repos/hazelcast/hazelcast-jet | reopened | com.hazelcast.jet.job.JobSubmissionSlownessRegressionTest.regressionTestForPR1488 | test-failure | _master_ / _3.2-maintenance_ (commit 166503c66113d5fe3d7e3158f4aba87cc683a7b9)
Failed on nightly build: http://jenkins.hazelcast.com/job/jet-oss-maintenance-nightly/52/testReport/junit/com.hazelcast.jet.job/JobSubmissionSlownessRegressionTest/regressionTestForPR1488/
**It fails intermittently because of the nature of the test: it is not deterministic. A fix will probably be tricky and will likely require a redesign of the whole test.**
Stacktrace:
```
java.lang.AssertionError: Job submission rate should not decrease. First rate: 5.197103105280812, second rate: 3.840073037683318
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at com.hazelcast.jet.job.JobSubmissionSlownessRegressionTest.regressionTestForPR1488(JobSubmissionSlownessRegressionTest.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:106)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:98)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
```
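The assertion above compares two measured rates directly, which is inherently racy on a shared CI box. As a sketch of why such a check is non-deterministic and how a relative tolerance absorbs the noise (the real test is Java, and its actual logic and thresholds are not shown here — the 50% tolerance below is an arbitrary illustrative value):

```python
def rate_regressed(first_rate, second_rate, tolerance=0.5):
    """Report a regression only when the second measurement drops more
    than `tolerance` (a fraction) below the first, absorbing CI noise."""
    return second_rate < first_rate * (1.0 - tolerance)

# Rates from the failing run in the stacktrace above:
first, second = 5.197103105280812, 3.840073037683318
print(second < first)                 # True  -> a strict "no decrease" check fails
print(rate_regressed(first, second))  # False -> the ~26% drop is within tolerance
```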
Standard output:
```
Hiccups measured while running test 'regressionTestForPR1488(com.hazelcast.jet.job.JobSubmissionSlownessRegressionTest):'
19:16:50, accumulated pauses: 984 ms, max pause: 616 ms, pauses over 1000 ms: 0
19:16:55, accumulated pauses: 391 ms, max pause: 101 ms, pauses over 1000 ms: 0
19:17:00, accumulated pauses: 46 ms, max pause: 3 ms, pauses over 1000 ms: 0
19:17:05, accumulated pauses: 1449 ms, max pause: 1122 ms, pauses over 1000 ms: 1
19:17:10, accumulated pauses: 120 ms, max pause: 18 ms, pauses over 1000 ms: 0
19:17:15, accumulated pauses: 83 ms, max pause: 33 ms, pauses over 1000 ms: 0
19:17:20, accumulated pauses: 278 ms, max pause: 87 ms, pauses over 1000 ms: 0
19:17:25, accumulated pauses: 88 ms, max pause: 55 ms, pauses over 1000 ms: 0
19:17:30, accumulated pauses: 367 ms, max pause: 190 ms, pauses over 1000 ms: 0
19:17:35, accumulated pauses: 225 ms, max pause: 116 ms, pauses over 1000 ms: 0
19:17:40, accumulated pauses: 93 ms, max pause: 45 ms, pauses over 1000 ms: 0
19:17:45, accumulated pauses: 36 ms, max pause: 1 ms, pauses over 1000 ms: 0
19:17:50, accumulated pauses: 399 ms, max pause: 121 ms, pauses over 1000 ms: 0
19:17:55, accumulated pauses: 132 ms, max pause: 19 ms, pauses over 1000 ms: 0
19:18:00, accumulated pauses: 424 ms, max pause: 314 ms, pauses over 1000 ms: 0
19:18:05, accumulated pauses: 272 ms, max pause: 67 ms, pauses over 1000 ms: 0
19:18:10, accumulated pauses: 56 ms, max pause: 13 ms, pauses over 1000 ms: 0
19:18:15, accumulated pauses: 236 ms, max pause: 64 ms, pauses over 1000 ms: 0
19:18:20, accumulated pauses: 267 ms, max pause: 93 ms, pauses over 1000 ms: 0
19:18:25, accumulated pauses: 62 ms, max pause: 31 ms, pauses over 1000 ms: 0
19:18:30, accumulated pauses: 709 ms, max pause: 626 ms, pauses over 1000 ms: 0
19:18:35, accumulated pauses: 232 ms, max pause: 45 ms, pauses over 1000 ms: 0
19:18:40, accumulated pauses: 76 ms, max pause: 39 ms, pauses over 1000 ms: 0
19:18:45, accumulated pauses: 184 ms, max pause: 67 ms, pauses over 1000 ms: 0
19:18:50, accumulated pauses: 194 ms, max pause: 68 ms, pauses over 1000 ms: 0
19:18:55, accumulated pauses: 682 ms, max pause: 469 ms, pauses over 1000 ms: 0
19:19:00, accumulated pauses: 103 ms, max pause: 31 ms, pauses over 1000 ms: 0
19:19:05, accumulated pauses: 249 ms, max pause: 56 ms, pauses over 1000 ms: 0
19:19:10, accumulated pauses: 185 ms, max pause: 57 ms, pauses over 1000 ms: 0
19:19:15, accumulated pauses: 85 ms, max pause: 48 ms, pauses over 1000 ms: 0
19:19:20, accumulated pauses: 295 ms, max pause: 73 ms, pauses over 1000 ms: 0
19:19:25, accumulated pauses: 935 ms, max pause: 828 ms, pauses over 1000 ms: 0
19:19:30, accumulated pauses: 244 ms, max pause: 43 ms, pauses over 1000 ms: 0
19:19:35, accumulated pauses: 36 ms, max pause: 2 ms, pauses over 1000 ms: 0
19:19:40, accumulated pauses: 330 ms, max pause: 99 ms, pauses over 1000 ms: 0
19:19:45, accumulated pauses: 104 ms, max pause: 17 ms, pauses over 1000 ms: 0
19:19:50, accumulated pauses: 36 ms, max pause: 0 ms, pauses over 1000 ms: 0
19:19:55, accumulated pauses: 710 ms, max pause: 570 ms, pauses over 1000 ms: 0
19:20:00, accumulated pauses: 381 ms, max pause: 106 ms, pauses over 1000 ms: 0
19:20:05, accumulated pauses: 66 ms, max pause: 12 ms, pauses over 1000 ms: 0
19:20:10, accumulated pauses: 60 ms, max pause: 13 ms, pauses over 1000 ms: 0
19:20:15, accumulated pauses: 688 ms, max pause: 561 ms, pauses over 1000 ms: 0
19:20:20, accumulated pauses: 44 ms, max pause: 2 ms, pauses over 1000 ms: 0
19:20:25, accumulated pauses: 51 ms, max pause: 3 ms, pauses over 1000 ms: 0
19:20:30, accumulated pauses: 40 ms, max pause: 3 ms, pauses over 1000 ms: 0
19:20:35, accumulated pauses: 341 ms, max pause: 122 ms, pauses over 1000 ms: 0
19:20:40, accumulated pauses: 441 ms, max pause: 190 ms, pauses over 1000 ms: 0
19:20:45, accumulated pauses: 37 ms, max pause: 0 ms, pauses over 1000 ms: 0
19:20:50, accumulated pauses: 51 ms, max pause: 11 ms, pauses over 1000 ms: 0
19:20:55, accumulated pauses: 349 ms, max pause: 202 ms, pauses over 1000 ms: 0
19:21:00, accumulated pauses: 270 ms, max pause: 134 ms, pauses over 1000 ms: 0
19:21:05, accumulated pauses: 118 ms, max pause: 60 ms, pauses over 1000 ms: 0
19:21:10, accumulated pauses: 633 ms, max pause: 261 ms, pauses over 1000 ms: 0
19:21:15, accumulated pauses: 116 ms, max pause: 19 ms, pauses over 1000 ms: 0
19:21:20, accumulated pauses: 334 ms, max pause: 260 ms, pauses over 1000 ms: 0
19:21:25, accumulated pauses: 42 ms, max pause: 1 ms, pauses over 1000 ms: 0
19:21:30, accumulated pauses: 457 ms, max pause: 219 ms, pauses over 1000 ms: 0
19:21:35, accumulated pauses: 276 ms, max pause: 43 ms, pauses over 1000 ms: 0
19:21:40, accumulated pauses: 83 ms, max pause: 39 ms, pauses over 1000 ms: 0
19:21:45, accumulated pauses: 34 ms, max pause: 1 ms, pauses over 1000 ms: 0
19:21:50, accumulated pauses: 277 ms, max pause: 117 ms, pauses over 1000 ms: 0
19:21:55, accumulated pauses: 251 ms, max pause: 58 ms, pauses over 1000 ms: 0
19:22:00, accumulated pauses: 64 ms, max pause: 5 ms, pauses over 1000 ms: 0
19:22:05, accumulated pauses: 79 ms, max pause: 22 ms, pauses over 1000 ms: 0
19:22:10, accumulated pauses: 1194 ms, max pause: 833 ms, pauses over 1000 ms: 0
19:22:15, accumulated pauses: 40 ms, max pause: 3 ms, pauses over 1000 ms: 0
19:22:20, accumulated pauses: 71 ms, max pause: 19 ms, pauses over 1000 ms: 0
19:22:25, accumulated pauses: 149 ms, max pause: 70 ms, pauses over 1000 ms: 0
19:22:30, accumulated pauses: 450 ms, max pause: 212 ms, pauses over 1000 ms: 0
19:22:35, accumulated pauses: 116 ms, max pause: 18 ms, pauses over 1000 ms: 0
19:22:40, accumulated pauses: 1004 ms, max pause: 567 ms, pauses over 1000 ms: 0
19:22:45, accumulated pauses: 197 ms, max pause: 17 ms, pauses over 1000 ms: 0
19:22:50, accumulated pauses: 208 ms, max pause: 171 ms, pauses over 1000 ms: 0
19:22:55, accumulated pauses: 385 ms, max pause: 126 ms, pauses over 1000 ms: 0
19:23:00, accumulated pauses: 242 ms, max pause: 141 ms, pauses over 1000 ms: 0
19:23:05, accumulated pauses: 95 ms, max pause: 19 ms, pauses over 1000 ms: 0
19:23:10, accumulated pauses: 821 ms, max pause: 729 ms, pauses over 1000 ms: 0
19:23:15, accumulated pauses: 243 ms, max pause: 100 ms, pauses over 1000 ms: 0
19:23:20, accumulated pauses: 171 ms, max pause: 12 ms, pauses over 1000 ms: 0
19:23:25, accumulated pauses: 177 ms, max pause: 78 ms, pauses over 1000 ms: 0
19:23:30, accumulated pauses: 157 ms, max pause: 35 ms, pauses over 1000 ms: 0
19:23:35, accumulated pauses: 312 ms, max pause: 158 ms, pauses over 1000 ms: 0
19:23:40, accumulated pauses: 287 ms, max pause: 196 ms, pauses over 1000 ms: 0
19:23:45, accumulated pauses: 176 ms, max pause: 100 ms, pauses over 1000 ms: 0
19:23:50, accumulated pauses: 299 ms, max pause: 95 ms, pauses over 1000 ms: 0
19:23:55, accumulated pauses: 598 ms, max pause: 559 ms, pauses over 1000 ms: 0
```
47,480 | 13,237,571,041 | IssuesEvent | 2020-08-18 21:58:41 | benchabot/NSwag | https://api.github.com/repos/benchabot/NSwag | opened | CVE-2020-7656 (Medium) detected in jquery-1.7.2.min.js, jquery-1.7.1.min.js | security vulnerability | ## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.7.2.min.js</b>, <b>jquery-1.7.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.7.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/NSwag/src/NSwag.Sample.NetCoreAngular/node_modules/js-base64/test/index.html</p>
<p>Path to vulnerable library: /NSwag/src/NSwag.Sample.NetCoreAngular/node_modules/js-base64/test/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.2.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/NSwag/src/NSwag.Sample.NetCoreAurelia/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: /NSwag/src/NSwag.Sample.NetCoreAurelia/node_modules/vm-browserify/example/run/index.html,/NSwag/src/NSwag.Sample.NetCoreAngular/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/benchabot/NSwag/commit/8db6af2594d3efef9fa2f40f467925d2039d81ee">8db6af2594d3efef9fa2f40f467925d2039d81ee</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e. "</script >", which results in the enclosed script logic being executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
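To illustrate the whitespace bypass described above, here is a Python sketch of the regex behavior. The patterns are simplified stand-ins, not jQuery's actual source:

```python
import re

html = 'Hello <script>alert(1)</script > world'

# A strict stripper that only recognizes a literal "</script>" close tag
# misses the variant with whitespace before ">", so the payload survives:
strict = re.sub(r"<script\b[^>]*>.*?</script>", "", html, flags=re.S | re.I)

# Tolerating whitespace inside the close tag removes it:
tolerant = re.sub(r"<script\b[^>]*>.*?</script\s*>", "", html, flags=re.S | re.I)

print("alert(1)" in strict)    # True  -> script logic not stripped
print("alert(1)" in tolerant)  # False -> stripped
```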
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
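The 6.1 base score follows from applying the CVSS v3.0 base equations to the metrics listed above (vector AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N). A sketch of that calculation, using the standard metric weights:

```python
import math

def roundup(x):
    """CVSS v3 Roundup: smallest number with one decimal place >= x."""
    return math.ceil(x * 10) / 10

# Weights for AV:N / AC:L / PR:N / UI:R and C:L / I:L / A:N
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.62
c, i, a = 0.22, 0.22, 0.0

iss = 1 - (1 - c) * (1 - i) * (1 - a)
# Scope: Changed variant of the Impact sub-score:
impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
exploitability = 8.22 * av * ac * pr * ui
# (The spec returns 0 when impact <= 0; that branch is omitted here.)
base = roundup(min(1.08 * (impact + exploitability), 10))  # Scope: Changed
print(base)  # 6.1
```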
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568">https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: jquery-rails - 2.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568">https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: jquery-rails - 2.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in jquery min js jquery min js cve medium severity vulnerability vulnerable libraries jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm nswag src nswag sample netcoreangular node modules js test index html path to vulnerable library nswag src nswag sample netcoreangular node modules js test index html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm nswag src nswag sample netcoreaurelia node modules vm browserify example run index html path to vulnerable library nswag src nswag sample netcoreaurelia node modules vm browserify example run index html nswag src nswag sample netcoreangular node modules vm browserify example run index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery prior to allows cross site scripting attacks via the load method the load method fails to recognize and remove html tags that contain a whitespace character i e which results in the enclosed script logic to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery rails step up your open source security game with whitesource | 0 |
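The record above describes CVE-2020-7656: jQuery before 1.9.0 stripped `<script>` blocks from HTML fetched by `.load()`, but its pattern expected a literal `</script>` close tag, so a closing tag written as `</script >` (whitespace before the `>`, which browsers still honor) slipped through and executed. As an illustrative sketch only — this is not jQuery's actual code, and `naive_strip_scripts`/`NAIVE_SCRIPT_RE` are hypothetical names — the same class of bug can be reproduced with a naive regex filter:

```python
import re

# Naive filter that expects the exact closing tag "</script>".
# Mirrors the class of bug in CVE-2020-7656 (jQuery < 1.9.0), not jQuery's real regex.
NAIVE_SCRIPT_RE = re.compile(r"<script\b.*?</script>", re.IGNORECASE | re.DOTALL)

# Tolerant variant: allows whitespace before ">" in the closing tag, as HTML parsers do.
FIXED_SCRIPT_RE = re.compile(r"<script\b.*?</script\s*>", re.IGNORECASE | re.DOTALL)

def naive_strip_scripts(html: str) -> str:
    """Remove <script>...</script> blocks -- but only when the close tag is exact."""
    return NAIVE_SCRIPT_RE.sub("", html)

plain = "<div>hi</div><script>alert(1)</script>"
tricky = "<div>hi</div><script>alert(1)</script >"  # note the space before ">"

print(naive_strip_scripts(plain))   # script block removed
print(naive_strip_scripts(tricky))  # script block survives the naive filter
print(FIXED_SCRIPT_RE.sub("", tricky))  # tolerant pattern removes it
```

The takeaway matching the suggested fix in the record: upgrade rather than sanitize with ad-hoc patterns, since HTML tokenization accepts variants a literal-string filter will miss.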
195,640 | 6,916,392,428 | IssuesEvent | 2017-11-29 02:18:00 | googleapis/toolkit | https://api.github.com/repos/googleapis/toolkit | closed | Generated Python libraries should use google.cloud.future.operation instead of google.gax._OperationFuture | Lang: Python Priority: P1 Type: Enhancement | `google.cloud.future.operation` is in `google-cloud-core`. | 1.0 | Generated Python libraries should use google.cloud.future.operation instead of google.gax._OperationFuture - `google.cloud.future.operation` is in `google-cloud-core`. | non_process | generated python libraries should use google cloud future operation instead of google gax operationfuture google cloud future operation is in google cloud core | 0 |