Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
199,571 | 15,049,249,810 | IssuesEvent | 2021-02-03 11:13:44 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] TimeSeriesDataStreamsIT.testShrinkAfterRollover | :Core/Features/ILM+SLM >test-failure Team:Core/Features v8.0.0 | <!--
Please fill out the following information, and ensure you have attempted
to reproduce locally
-->
**Build scan**:
https://gradle-enterprise.elastic.co/s/nqwhqniws3pp2
**Repro line**:
```
./gradlew ':x-pack:plugin:ilm:qa:multi-node:javaRestTest' --tests "org.elasticsearch.xpack.ilm.TimeSeriesDataStreamsIT.testShrinkAfterRollover" -Dtests.seed=1DCA2402DADCCD33 -Dtests.security.manager=true -Dtests.locale=es-AR -Dtests.timezone=America/Grand_Turk -Druntime.java=11
```
**Reproduces locally?**:
Not exactly - it failed for me locally with `the rollover action created the rollover index`.
**Applicable branches**:
`master`
**Failure history**:
https://gradle-enterprise.elastic.co/scans/tests?search.relativeStartTime=P7D&search.timeZoneId=Europe/London&tests.container=org.elasticsearch.xpack.ilm.TimeSeriesDataStreamsIT&tests.sortField=FAILED&tests.test=testShrinkAfterRollover&tests.unstableOnly=true
**Failure excerpt**:
```
java.lang.AssertionError: the shrunken index was deleted by the delete action
at org.elasticsearch.xpack.ilm.TimeSeriesDataStreamsIT.lambda$testShrinkAfterRollover$12(TimeSeriesDataStreamsIT.java:127)
at org.elasticsearch.xpack.ilm.TimeSeriesDataStreamsIT.testShrinkAfterRollover(TimeSeriesDataStreamsIT.java:127)
```
| 1.0 | [CI] TimeSeriesDataStreamsIT.testShrinkAfterRollover - <!--
Please fill out the following information, and ensure you have attempted
to reproduce locally
-->
**Build scan**:
https://gradle-enterprise.elastic.co/s/nqwhqniws3pp2
**Repro line**:
```
./gradlew ':x-pack:plugin:ilm:qa:multi-node:javaRestTest' --tests "org.elasticsearch.xpack.ilm.TimeSeriesDataStreamsIT.testShrinkAfterRollover" -Dtests.seed=1DCA2402DADCCD33 -Dtests.security.manager=true -Dtests.locale=es-AR -Dtests.timezone=America/Grand_Turk -Druntime.java=11
```
**Reproduces locally?**:
Not exactly - it failed for me locally with `the rollover action created the rollover index`.
**Applicable branches**:
`master`
**Failure history**:
https://gradle-enterprise.elastic.co/scans/tests?search.relativeStartTime=P7D&search.timeZoneId=Europe/London&tests.container=org.elasticsearch.xpack.ilm.TimeSeriesDataStreamsIT&tests.sortField=FAILED&tests.test=testShrinkAfterRollover&tests.unstableOnly=true
**Failure excerpt**:
```
java.lang.AssertionError: the shrunken index was deleted by the delete action
at org.elasticsearch.xpack.ilm.TimeSeriesDataStreamsIT.lambda$testShrinkAfterRollover$12(TimeSeriesDataStreamsIT.java:127)
at org.elasticsearch.xpack.ilm.TimeSeriesDataStreamsIT.testShrinkAfterRollover(TimeSeriesDataStreamsIT.java:127)
```
| test | timeseriesdatastreamsit testshrinkafterrollover please fill out the following information and ensure you have attempted to reproduce locally build scan repro line gradlew x pack plugin ilm qa multi node javaresttest tests org elasticsearch xpack ilm timeseriesdatastreamsit testshrinkafterrollover dtests seed dtests security manager true dtests locale es ar dtests timezone america grand turk druntime java reproduces locally not exactly it failed for me locally with the rollover action created the rollover index applicable branches master failure history failure excerpt java lang assertionerror the shrunken index was deleted by the delete action at org elasticsearch xpack ilm timeseriesdatastreamsit lambda testshrinkafterrollover timeseriesdatastreamsit java at org elasticsearch xpack ilm timeseriesdatastreamsit testshrinkafterrollover timeseriesdatastreamsit java | 1 |
25,033 | 4,128,253,667 | IssuesEvent | 2016-06-10 04:49:32 | openshift/source-to-image | https://api.github.com/repos/openshift/source-to-image | closed | Build test-images test flake | area/tests priority/P3 | Sometimes our test end with `You do not have necessary test images, be sure to run 'hack/build-test-images.sh' beforehand.` This is one example of such error: https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_sti/247/console . It would be good to nail down the reason for that. | 1.0 | Build test-images test flake - Sometimes our test end with `You do not have necessary test images, be sure to run 'hack/build-test-images.sh' beforehand.` This is one example of such error: https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_sti/247/console . It would be good to nail down the reason for that. | test | build test images test flake sometimes our test end with you do not have necessary test images be sure to run hack build test images sh beforehand this is one example of such error it would be good to nail down the reason for that | 1 |
50,655 | 6,418,969,681 | IssuesEvent | 2017-08-08 20:09:46 | Automattic/wp-calypso | https://api.github.com/repos/Automattic/wp-calypso | closed | Activity log: Handling large quantities of activities | API Design Jetpack Activity Log [Status] In Progress [Type] Question | STUB
I want to start some conversation about how we'll deal with activity data. | 1.0 | Activity log: Handling large quantities of activities - STUB
I want to start some conversation about how we'll deal with activity data. | non_test | activity log handling large quantities of activities stub i want to start some conversation about how we ll deal with activity data | 0 |
6,270 | 14,075,970,981 | IssuesEvent | 2020-11-04 09:49:01 | dusk-network/rusk | https://api.github.com/repos/dusk-network/rusk | opened | Refactor DUSK Contract circuits | area:architecture area:cryptography type:refactor | As per recently modified [specs](https://app.gitbook.com/@dusk-network/s/specs/specifications/smart-contracts/genenesis-contracts/dusk-contract/methods), there is a need to change the circuits in the DUSK contract.
These changes include:
- Using Schnorr gadget over secret key gadget
- Replacing preimage gadget
- Implementing crossover | 1.0 | Refactor DUSK Contract circuits - As per recently modified [specs](https://app.gitbook.com/@dusk-network/s/specs/specifications/smart-contracts/genenesis-contracts/dusk-contract/methods), there is a need to change the circuits in the DUSK contract.
These changes include:
- Using Schnorr gadget over secret key gadget
- Replacing preimage gadget
- Implementing crossover | non_test | refactor dusk contract circuits as per recently modified there is a need to change the circuits in the dusk contract these changes include using schnorr gadget over secret key gadget replacing preimage gadget implementing crossover | 0 |
57,415 | 7,055,840,251 | IssuesEvent | 2018-01-04 10:08:31 | MantaRayMedia/eahealth | https://api.github.com/repos/MantaRayMedia/eahealth | opened | Home page | needs design | ### Done criteria
- [ ] ?
- [ ] ?
---
### Wireframes
- [ ] https://projects.invisionapp.com/d/main#/console/12443475/271624627/preview
---
### Designs
- [ ] ?
- [ ] ?
| 1.0 | Home page - ### Done criteria
- [ ] ?
- [ ] ?
---
### Wireframes
- [ ] https://projects.invisionapp.com/d/main#/console/12443475/271624627/preview
---
### Designs
- [ ] ?
- [ ] ?
| non_test | home page done criteria wireframes designs | 0 |
53,252 | 6,713,301,196 | IssuesEvent | 2017-10-13 12:59:04 | SAP/techne | https://api.github.com/repos/SAP/techne | closed | Design Tool bar | backlog batch 2 design UX | goal: design and improve the toolbar
### Current v1.5.x
<img width="970" alt="screen shot 2017-07-13 at 12 38 37" src="https://user-images.githubusercontent.com/22662903/28162679-3a9416de-67c8-11e7-94ab-01f6e7ba065d.png">
### Tasks
- [x] 1. Define use cases
- [x] 1. Define **Filtering**
- [ ] 2. Define **Search**
- [x] 3. Define **Pagination**
- [x] 4. Define **Views**
- [x] 5. Define **Actions** - new
- [x] Define UX Elements for 1
- [x] Visual Design
- [x] Document results
| 1.0 | Design Tool bar - goal: design and improve the toolbar
### Current v1.5.x
<img width="970" alt="screen shot 2017-07-13 at 12 38 37" src="https://user-images.githubusercontent.com/22662903/28162679-3a9416de-67c8-11e7-94ab-01f6e7ba065d.png">
### Tasks
- [x] 1. Define use cases
- [x] 1. Define **Filtering**
- [ ] 2. Define **Search**
- [x] 3. Define **Pagination**
- [x] 4. Define **Views**
- [x] 5. Define **Actions** - new
- [x] Define UX Elements for 1
- [x] Visual Design
- [x] Document results
| non_test | design tool bar goal design and improve the toolbar current x img width alt screen shot at src tasks define use cases define filtering define search define pagination define views define actions new define ux elements for visual design document results | 0 |
92,674 | 8,375,629,180 | IssuesEvent | 2018-10-05 17:02:32 | elastic/eui | https://api.github.com/repos/elastic/eui | closed | Add guide to writing visual regression tests to README.md | test | In #630, the request was made to add a style guide/best practices guide for writing visual UI Tests.
This issue will track that work.
- [ ] Best Practices for Writing VR Tets
- [ ] Page Objects
- [ ] Navigation
- [ ] Hook Usage | 1.0 | Add guide to writing visual regression tests to README.md - In #630, the request was made to add a style guide/best practices guide for writing visual UI Tests.
This issue will track that work.
- [ ] Best Practices for Writing VR Tets
- [ ] Page Objects
- [ ] Navigation
- [ ] Hook Usage | test | add guide to writing visual regression tests to readme md in the request was made to add a style guide best practices guide for writing visual ui tests this issue will track that work best practices for writing vr tets page objects navigation hook usage | 1 |
559,907 | 16,580,612,515 | IssuesEvent | 2021-05-31 11:16:08 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | 9.2.0 1897 AshlarStoneTable and AdornedAshlarStoneTable missing "mount" component | Category: Gameplay Priority: Low Squad: Redwood Status: Fixed | AshlarStoneTable and AdornedAshlarStoneTable are missing the "mount" components
also missing the bit just below porotected override void when comparing them

| 1.0 | 9.2.0 1897 AshlarStoneTable and AdornedAshlarStoneTable missing "mount" component - AshlarStoneTable and AdornedAshlarStoneTable are missing the "mount" components
also missing the bit just below porotected override void when comparing them

| non_test | ashlarstonetable and adornedashlarstonetable missing mount component ashlarstonetable and adornedashlarstonetable are missing the mount components also missing the bit just below porotected override void when comparing them | 0 |
3,154 | 2,659,168,233 | IssuesEvent | 2015-03-18 19:25:50 | DynamoRIO/drmemory | https://api.github.com/repos/DynamoRIO/drmemory | closed | app_suite.strcasecmp fails because expected locales are not available | Component-Tests OpSys-Linux Type-Bug | The test fails if en_US.iso88591 and en_US.iso885915 are not available. | 1.0 | app_suite.strcasecmp fails because expected locales are not available - The test fails if en_US.iso88591 and en_US.iso885915 are not available. | test | app suite strcasecmp fails because expected locales are not available the test fails if en us and en us are not available | 1 |
671,145 | 22,744,430,235 | IssuesEvent | 2022-07-07 07:55:06 | tektoncd/pipeline | https://api.github.com/repos/tektoncd/pipeline | closed | Tasks are failing when "init" is the first argument followed by two or more other arguments. | kind/bug priority/critical-urgent | # Expected Behavior
Given the following task
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: tkn-arg-test
labels:
app.kubernetes.io/version: "0.4"
annotations:
tekton.dev/pipelines.minVersion: "0.22.0"
tekton.dev/tags: cli
spec:
description: >-
Test consuming args
params:
- name: ARGS
description: The terraform cli commands to tun
type: array
default:
- "--help"
- name: USER_HOME
description: Override home directory to /tekton/home
type: string
default: "/tekton/home"
steps:
- name: echo-cli
image: registry.access.redhat.com/ubi9/ubi-minimal:9.0.0-1580
workingDir: /tekton/home
args:
- "$(params.ARGS)"
command: ["echo"]
resources:
limits:
cpu: 250m
memory: 1Gi
requests:
cpu: 250m
memory: 500Mi
env:
- name: "HOME"
value: $(params.USER_HOME)
```
The following command should work:
```bash
$ tkn task start tkn-arg-test "-p=ARGS=init,two" --use-param-defaults --showlog
# Works…
[echo-cli] init two
$ tkn task start tkn-arg-test "-p=ARGS=init,two,three" --use-param-defaults --showlog
# Doesn't work
[echo-cli] 2022/07/05 17:05:42 init error: open two: no such file or directory
```
# Actual Behavior
It fails as written above
# Steps to Reproduce the Problem
- Create the task
- Issue the `tkn` commands
# Additional Info
This affect any Pipeline release that contains the follow change :
https://github.com/tektoncd/pipeline/pull/4826. This means it affects 0.32 to 0.37 and the fix should be backport in all.
The main reason why it fails is the way the `entrypoint` binary is written and the way `flag` acts. To simplify a tiny bit, the command created on a task looks like `/ko-app/entrypoint --entrypoint echo -- command and args here` (with a bunch of flags before `--` ommited here). With the above example, it becomes `/ko-app/entrypoint --entrypoint echo -- init foo bar`. The `flag` package, **in that particular case** ignores the `--` and tells the rest of the code that the arguments are `init foo bar`, which is similar to `/ko-app/entrypoint --entrypoint echo init foo bar`. And *this*, in terms, goes into the `init` subcommand.
/assign
/priority critical-urgent | 1.0 | Tasks are failing when "init" is the first argument followed by two or more other arguments. - # Expected Behavior
Given the following task
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: tkn-arg-test
labels:
app.kubernetes.io/version: "0.4"
annotations:
tekton.dev/pipelines.minVersion: "0.22.0"
tekton.dev/tags: cli
spec:
description: >-
Test consuming args
params:
- name: ARGS
description: The terraform cli commands to tun
type: array
default:
- "--help"
- name: USER_HOME
description: Override home directory to /tekton/home
type: string
default: "/tekton/home"
steps:
- name: echo-cli
image: registry.access.redhat.com/ubi9/ubi-minimal:9.0.0-1580
workingDir: /tekton/home
args:
- "$(params.ARGS)"
command: ["echo"]
resources:
limits:
cpu: 250m
memory: 1Gi
requests:
cpu: 250m
memory: 500Mi
env:
- name: "HOME"
value: $(params.USER_HOME)
```
The following command should work:
```bash
$ tkn task start tkn-arg-test "-p=ARGS=init,two" --use-param-defaults --showlog
# Works…
[echo-cli] init two
$ tkn task start tkn-arg-test "-p=ARGS=init,two,three" --use-param-defaults --showlog
# Doesn't work
[echo-cli] 2022/07/05 17:05:42 init error: open two: no such file or directory
```
# Actual Behavior
It fails as written above
# Steps to Reproduce the Problem
- Create the task
- Issue the `tkn` commands
# Additional Info
This affect any Pipeline release that contains the follow change :
https://github.com/tektoncd/pipeline/pull/4826. This means it affects 0.32 to 0.37 and the fix should be backport in all.
The main reason why it fails is the way the `entrypoint` binary is written and the way `flag` acts. To simplify a tiny bit, the command created on a task looks like `/ko-app/entrypoint --entrypoint echo -- command and args here` (with a bunch of flags before `--` ommited here). With the above example, it becomes `/ko-app/entrypoint --entrypoint echo -- init foo bar`. The `flag` package, **in that particular case** ignores the `--` and tells the rest of the code that the arguments are `init foo bar`, which is similar to `/ko-app/entrypoint --entrypoint echo init foo bar`. And *this*, in terms, goes into the `init` subcommand.
/assign
/priority critical-urgent | non_test | tasks are failing when init is the first argument followed by two or more other arguments expected behavior given the following task yaml apiversion tekton dev kind task metadata name tkn arg test labels app kubernetes io version annotations tekton dev pipelines minversion tekton dev tags cli spec description test consuming args params name args description the terraform cli commands to tun type array default help name user home description override home directory to tekton home type string default tekton home steps name echo cli image registry access redhat com ubi minimal workingdir tekton home args params args command resources limits cpu memory requests cpu memory env name home value params user home the following command should work bash tkn task start tkn arg test p args init two use param defaults showlog works… init two tkn task start tkn arg test p args init two three use param defaults showlog doesn t work init error open two no such file or directory actual behavior it fails as written above steps to reproduce the problem create the task issue the tkn commands additional info this affect any pipeline release that contains the follow change this means it affects to and the fix should be backport in all the main reason why it fails is the way the entrypoint binary is written and the way flag acts to simplify a tiny bit the command created on a task looks like ko app entrypoint entrypoint echo command and args here with a bunch of flags before ommited here with the above example it becomes ko app entrypoint entrypoint echo init foo bar the flag package in that particular case ignores the and tells the rest of the code that the arguments are init foo bar which is similar to ko app entrypoint entrypoint echo init foo bar and this in terms goes into the init subcommand assign priority critical urgent | 0 |
175,831 | 6,554,350,106 | IssuesEvent | 2017-09-06 05:14:22 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | [upgrade test failure][sig-api-machinery] Initializers will be set to nil if a patch removes the last pending initializer | kind/bug priority/critical-urgent release-blocker sig/api-machinery | Seeing this test failure pretty consistently with kops, to the extent that we can't merge e.g. https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/kops/3288/pull-kops-e2e-kubernetes-aws/2743/
Based on timing, seems likely to be #51082, although something odd happened in that this failure occurred on that PR but it still merged: https://github.com/kubernetes/kubernetes/pull/51082#issuecomment-325180686
cc @kubernetes/sig-api-machinery-bugs | 1.0 | [upgrade test failure][sig-api-machinery] Initializers will be set to nil if a patch removes the last pending initializer - Seeing this test failure pretty consistently with kops, to the extent that we can't merge e.g. https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/kops/3288/pull-kops-e2e-kubernetes-aws/2743/
Based on timing, seems likely to be #51082, although something odd happened in that this failure occurred on that PR but it still merged: https://github.com/kubernetes/kubernetes/pull/51082#issuecomment-325180686
cc @kubernetes/sig-api-machinery-bugs | non_test | initializers will be set to nil if a patch removes the last pending initializer seeing this test failure pretty consistently with kops to the extent that we can t merge e g based on timing seems likely to be although something odd happened in that this failure occurred on that pr but it still merged cc kubernetes sig api machinery bugs | 0 |
102,274 | 8,823,837,712 | IssuesEvent | 2019-01-02 15:04:18 | NativeScript/nativescript-cli | https://api.github.com/repos/NativeScript/nativescript-cli | closed | Fresh project build error on ios | backlog os: ios ready for test | @hamdiwanis commented on [Sun Dec 16 2018](https://github.com/NativeScript/NativeScript/issues/6715)
**Environment**
- CLI: 5.1.0
- Cross-platform modules: 5.1.0
- Android Runtime: 5.1.0
- iOS Runtime: 5.1.0
- Plugin(s): none
**Describe the bug**
i get this error with ios build
```
linking ObjC for iOS Simulator, but dylib (/Volumes/project/platforms/ios/internal/NativeScript.framework/NativeScript) was compiled for MacOSX
ld: framework not found CoreServices for architecture i386
```
**To Reproduce**
```
tns create test
cd test
npm i
tns run ios
```
---
@NickIliev commented on [Mon Dec 17 2018](https://github.com/NativeScript/NativeScript/issues/6715#issuecomment-447735889)
@hamdiwanis what is the simulator/device you are running on?
| 1.0 | Fresh project build error on ios - @hamdiwanis commented on [Sun Dec 16 2018](https://github.com/NativeScript/NativeScript/issues/6715)
**Environment**
- CLI: 5.1.0
- Cross-platform modules: 5.1.0
- Android Runtime: 5.1.0
- iOS Runtime: 5.1.0
- Plugin(s): none
**Describe the bug**
i get this error with ios build
```
linking ObjC for iOS Simulator, but dylib (/Volumes/project/platforms/ios/internal/NativeScript.framework/NativeScript) was compiled for MacOSX
ld: framework not found CoreServices for architecture i386
```
**To Reproduce**
```
tns create test
cd test
npm i
tns run ios
```
---
@NickIliev commented on [Mon Dec 17 2018](https://github.com/NativeScript/NativeScript/issues/6715#issuecomment-447735889)
@hamdiwanis what is the simulator/device you are running on?
| test | fresh project build error on ios hamdiwanis commented on environment cli cross platform modules android runtime ios runtime plugin s none describe the bug i get this error with ios build linking objc for ios simulator but dylib volumes project platforms ios internal nativescript framework nativescript was compiled for macosx ld framework not found coreservices for architecture to reproduce tns create test cd test npm i tns run ios nickiliev commented on hamdiwanis what is the simulator device you are running on | 1 |
174,729 | 13,508,126,361 | IssuesEvent | 2020-09-14 07:13:03 | knative/serving | https://api.github.com/repos/knative/serving | closed | Debug TestActivatorHAGraceful and stabilize it | area/autoscale area/test-and-release kind/bug | <!--
Pro-tip: You can leave this block commented, and it still works!
Select the appropriate areas for your issue:
/area autoscale
/area test-and-release
Classify what kind of issue this is:
/kind bug
-->
We kind of optimistically added `TestActivatorHAGraceful` assuming it should always pass. It doesn't. The test will be skipped but we should really look at what's wrong there and triage if the activator really can't be gracefully shutdown without losing traffic occasionally.
| 1.0 | Debug TestActivatorHAGraceful and stabilize it - <!--
Pro-tip: You can leave this block commented, and it still works!
Select the appropriate areas for your issue:
/area autoscale
/area test-and-release
Classify what kind of issue this is:
/kind bug
-->
We kind of optimistically added `TestActivatorHAGraceful` assuming it should always pass. It doesn't. The test will be skipped but we should really look at what's wrong there and triage if the activator really can't be gracefully shutdown without losing traffic occasionally.
| test | debug testactivatorhagraceful and stabilize it pro tip you can leave this block commented and it still works select the appropriate areas for your issue area autoscale area test and release classify what kind of issue this is kind bug we kind of optimistically added testactivatorhagraceful assuming it should always pass it doesn t the test will be skipped but we should really look at what s wrong there and triage if the activator really can t be gracefully shutdown without losing traffic occasionally | 1 |
151,701 | 12,056,085,592 | IssuesEvent | 2020-04-15 13:57:58 | ekmett/lens | https://api.github.com/repos/ekmett/lens | closed | Figure out what to do about the language-haskell-extract dependency | packaging test-suite third-party | I am not yet able to run `lens`' test suite + GHC 8.10.1 on Travis CI without the use of `head.hackage`. Why, you ask? It is because various test suites depend on `test-framework-th`, which in turn depends on `language-haskell-extract`. Sadly, `language-haskell-extract` [does not build with `template-haskell-2.16.*`](https://github.com/finnsson/template-helper/issues/12), and the maintainer of the library has not been active on GitHub for about two years. Bottom line: `language-haskell-extract` is likely abandonware.
This poses a problem for `lens`. I can see two ways forward:
1. Beg someone to do a package takeover of `language-haskell-extract` and upload a new version.
2. Remove `lens`' dependency on `test-framework-th`. As far as I can tell, that package only provides a mild syntactic convenience in the form of automatic test discovery, so removing it shouldn't be too difficult. | 1.0 | Figure out what to do about the language-haskell-extract dependency - I am not yet able to run `lens`' test suite + GHC 8.10.1 on Travis CI without the use of `head.hackage`. Why, you ask? It is because various test suites depend on `test-framework-th`, which in turn depends on `language-haskell-extract`. Sadly, `language-haskell-extract` [does not build with `template-haskell-2.16.*`](https://github.com/finnsson/template-helper/issues/12), and the maintainer of the library has not been active on GitHub for about two years. Bottom line: `language-haskell-extract` is likely abandonware.
This poses a problem for `lens`. I can see two ways forward:
1. Beg someone to do a package takeover of `language-haskell-extract` and upload a new version.
2. Remove `lens`' dependency on `test-framework-th`. As far as I can tell, that package only provides a mild syntactic convenience in the form of automatic test discovery, so removing it shouldn't be too difficult. | test | figure out what to do about the language haskell extract dependency i am not yet able to run lens test suite ghc on travis ci without the use of head hackage why you ask it is because various test suites depend on test framework th which in turn depends on language haskell extract sadly language haskell extract and the maintainer of the library has not been active on github for about two years bottom line language haskell extract is likely abandonware this poses a problem for lens i can see two ways forward beg someone to do a package takeover of language haskell extract and upload a new version remove lens dependency on test framework th as far as i can tell that package only provides a mild syntactic convenience in the form of automatic test discovery so removing it shouldn t be too difficult | 1 |
262,456 | 22,841,318,045 | IssuesEvent | 2022-07-12 22:18:40 | mapbox/mapbox-gl-js | https://api.github.com/repos/mapbox/mapbox-gl-js | closed | `fog/terrain/sky-composition` render test is flaky | testing :100: | `render-tests/fog/terrain/sky-composition` failed once on a completely unrelated change. We should fix it | 1.0 | `fog/terrain/sky-composition` render test is flaky - `render-tests/fog/terrain/sky-composition` failed once on a completely unrelated change. We should fix it | test | fog terrain sky composition render test is flaky render tests fog terrain sky composition failed once on a completely unrelated change we should fix it | 1 |
209,826 | 23,730,862,402 | IssuesEvent | 2022-08-31 01:29:04 | arohablue/salesforce-a.kumar-ap06 | https://api.github.com/repos/arohablue/salesforce-a.kumar-ap06 | opened | CVE-2020-11022 (Medium) detected in jquery-3.4.1.js | security vulnerability | ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.4.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.js</a></p>
<p>Path to vulnerable library: /force-app/main/default/staticresources/Jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.4.1.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
***
com.hazelcast.map.impl.query.QueryIndexMigrationTest.testQueryWithIndexDuringJoin

Repo: hazelcast/hazelcast (https://github.com/hazelcast/hazelcast)
Labels: Team: Core, Type: Test-Failure, Source: Internal, Module: IMap, Module: Query
Closed: 2022-11-15 14:37:41

master, rev afbee062bba410a246fd2b865360fe789824867a
Failed on Oracle JDK 11, Linux
http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-sonar/937/testReport/com.hazelcast.map.impl.query/QueryIndexMigrationTest/
Stacktrace:
```
org.junit.runners.model.TestTimedOutException: test timed out after 60000 milliseconds
at java.base@11/java.lang.Thread.sleep(Native Method)
at java.base@11/java.lang.Thread.sleep(Thread.java:339)
at java.base@11/java.util.concurrent.TimeUnit.sleep(TimeUnit.java:446)
at app//com.hazelcast.test.HazelcastTestSupport.sleepMillis(HazelcastTestSupport.java:366)
at app//com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1270)
at app//com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1279)
at app//com.hazelcast.test.HazelcastTestSupport.waitAllForSafeState(HazelcastTestSupport.java:746)
at app//com.hazelcast.test.HazelcastTestSupport.waitAllForSafeState(HazelcastTestSupport.java:742)
at app//com.hazelcast.map.impl.query.QueryIndexMigrationTest.testQueryWithIndexDuringJoin(QueryIndexMigrationTest.java:249)
at java.base@11/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base@11/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base@11/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base@11/java.lang.reflect.Method.invoke(Method.java:566)
at app//org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at app//org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at app//org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at app//org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at app//com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at app//com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.base@11/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base@11/java.lang.Thread.run(Thread.java:834)
```
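For context on the stack trace above: `assertTrueEventually` polls a condition and sleeps between attempts, which is why the 60 s watchdog catches the test thread inside `Thread.sleep` while `waitAllForSafeState` is still waiting for a safe partition state. Below is a minimal, hypothetical sketch of that poll-and-sleep pattern — the class name `EventuallyAssert` and all internals are invented for illustration, not the actual `HazelcastTestSupport` code:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Hypothetical reduction of an assertTrueEventually-style helper:
// poll a condition, sleeping between attempts, until it holds or a
// deadline passes. An external test watchdog (like JUnit's timeout)
// will typically interrupt the thread mid-sleep, exactly as in the
// stack trace above.
final class EventuallyAssert {
    static void assertTrueEventually(BooleanSupplier condition, long timeoutSeconds) {
        long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(timeoutSeconds);
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean()) {
                return; // e.g. all partitions reached the SAFE state
            }
            try {
                Thread.sleep(100); // poll interval; the watchdog fires in here
            } catch (InterruptedException e) {
                throw new AssertionError(e);
            }
        }
        throw new AssertionError("condition not satisfied within " + timeoutSeconds + "s");
    }

    public static void main(String[] args) {
        assertTrueEventually(() -> true, 1); // satisfied condition returns immediately
        boolean timedOut = false;
        try {
            assertTrueEventually(() -> false, 1); // never satisfied: burns the budget sleeping
        } catch (AssertionError expected) {
            timedOut = true;
        }
        System.out.println(timedOut ? "timeout surfaced as AssertionError" : "unexpected");
        // prints "timeout surfaced as AssertionError"
    }
}
```

With a condition that never becomes true, the loop spends the whole budget in 100 ms sleep slices, which is where `FailOnTimeoutStatement` catches the thread in the trace above.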
Standard output:
```
Finished Running Test: testQueryWithIndexesWhileMigrating[copyBehavior: COPY_ON_READ] in 16.605 seconds.
03:17:44,813 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:44,815 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
[... 68 further near-identical "Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED" DEBUG lines from both members, logged roughly every 100 ms between 03:17:44,913 and 03:17:48,130, elided ...]
03:17:48,224 INFO |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.sad_rubin.HealthMonitor - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=623.2M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=69.57%, heap.memory.used/max=69.57%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=87.50%, load.system=19.53%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=4715, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
[... 10 more REPLICA_NOT_OWNED DEBUG lines (03:17:48,316 - 03:17:48,709) elided ...]
03:17:48,712 INFO |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.cool_rubin.HealthMonitor - [127.0.0.1]:5705 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=593.4M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=71.02%, heap.memory.used/max=71.02%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=18.13%, load.system=17.72%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-4, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=743, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,713 INFO |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.lucid_rubin.HealthMonitor - [127.0.0.1]:5704 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=593.0M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=71.04%, heap.memory.used/max=71.04%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=60.00%, load.system=0.00%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=861, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,714 INFO |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.stupefied_rubin.HealthMonitor - [127.0.0.1]:5703 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=592.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=71.06%, heap.memory.used/max=71.06%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=75.00%, load.system=71.43%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=759, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=1, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,715 INFO |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.vigorous_rubin.HealthMonitor - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=592.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=71.06%, heap.memory.used/max=71.06%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=50.00%, load.system=50.00%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-2, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=778, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,717 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,721 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,815 INFO |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.reverent_rubin.HealthMonitor - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=582.3M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=71.56%, heap.memory.used/max=71.56%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=20.26%, load.system=18.23%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-5, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=601, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=2, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,820 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,822 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,920 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,922 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,021 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,023 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,121 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,123 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,134 INFO |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.competent_rubin.HealthMonitor - [127.0.0.1]:5703 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.5G, heap.memory.free=554.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=72.91%, heap.memory.used/max=72.91%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=61.54%, load.system=57.14%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=583, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=2, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:49,221 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-8 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,223 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-12 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,322 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,324 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-12 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,422 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,424 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,524 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,525 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,625 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,626 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,726 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,727 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-14 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,826 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,827 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-14 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,927 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,929 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-14 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,027 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,029 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-14 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,127 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-1 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,130 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-14 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,130 INFO |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.admiring_rubin.HealthMonitor - [127.0.0.1]:5704 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.5G, heap.memory.free=488.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=76.13%, heap.memory.used/max=76.13%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=66.67%, load.system=75.00%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=625, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:50,228 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-1 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,230 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,328 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-1 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,331 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,428 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-11 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,431 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,511 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,528 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-1 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,531 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,629 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-1 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,632 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,710 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-11 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,808 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,808 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-8 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,908 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,908 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-8 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,008 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,008 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-8 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,109 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-8 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,111 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,210 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,212 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,310 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-8 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,314 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,411 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,414 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,511 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-8 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,514 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,615 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,617 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-11 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,716 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,720 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-8 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,817 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,822 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-13 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:51,917 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state,
...[truncated 3908441 chars]...
[lucid_rubin] [03:18:30,318] [thread=hz.lucid_rubin.partition-operation.thread-4,unit=count,metric=operation.thread.priorityPendingCount]=0
[lucid_rubin] [03:18:30,318] [metric=classloading.totalLoadedClassesCount]=157351
[lucid_rubin] [03:18:30,318] [thread=hz.lucid_rubin.generic-operation.thread-1,unit=count,metric=operation.thread.completedPartitionSpecificRunnableCount]=0
```
com.hazelcast.map.impl.query.QueryIndexMigrationTest.testQueryWithIndexDuringJoin - master, rev afbee062bba410a246fd2b865360fe789824867a
Failed on Oracle JDK 11, Linux
http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-sonar/937/testReport/com.hazelcast.map.impl.query/QueryIndexMigrationTest/
Stacktrace:
```
org.junit.runners.model.TestTimedOutException: test timed out after 60000 milliseconds
at java.base@11/java.lang.Thread.sleep(Native Method)
at java.base@11/java.lang.Thread.sleep(Thread.java:339)
at java.base@11/java.util.concurrent.TimeUnit.sleep(TimeUnit.java:446)
at app//com.hazelcast.test.HazelcastTestSupport.sleepMillis(HazelcastTestSupport.java:366)
at app//com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1270)
at app//com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1279)
at app//com.hazelcast.test.HazelcastTestSupport.waitAllForSafeState(HazelcastTestSupport.java:746)
at app//com.hazelcast.test.HazelcastTestSupport.waitAllForSafeState(HazelcastTestSupport.java:742)
at app//com.hazelcast.map.impl.query.QueryIndexMigrationTest.testQueryWithIndexDuringJoin(QueryIndexMigrationTest.java:249)
at java.base@11/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base@11/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base@11/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base@11/java.lang.reflect.Method.invoke(Method.java:566)
at app//org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at app//org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at app//org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at app//org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at app//com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at app//com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.base@11/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base@11/java.lang.Thread.run(Thread.java:834)
```
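The trace shows the 60 s test budget was spent inside `HazelcastTestSupport.assertTrueEventually` / `waitAllForSafeState`, i.e. a poll-and-sleep loop waiting for the cluster to reach a safe partition state that never arrived (the logs keep reporting `REPLICA_NOT_OWNED`). A minimal sketch of that poll-until-true pattern, with hypothetical names (the real helper lives in `HazelcastTestSupport`):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class PollUntilTrue {

    // Repeatedly evaluate the condition, sleeping between attempts,
    // until it holds or the deadline passes. When the deadline passes
    // first, the caller's outer @Test(timeout=...) rule fires with
    // TestTimedOutException, as seen in the stack trace above.
    static boolean pollUntilTrue(BooleanSupplier condition, long timeoutMillis, long sleepMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;                               // condition met in time
            }
            TimeUnit.MILLISECONDS.sleep(sleepMillis);      // the sleep frame in the trace
        }
        return false;                                      // timed out
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(pollUntilTrue(() -> true, 100, 10));   // true
        System.out.println(pollUntilTrue(() -> false, 50, 10));   // false
    }
}
```

In the failing run the condition is effectively "all partitions are in a safe state", which stays false while replicas remain in `REPLICA_NOT_OWNED`, so the loop sleeps until the JUnit timeout kills the test.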
Standard output:
```
Finished Running Test: testQueryWithIndexesWhileMigrating[copyBehavior: COPY_ON_READ] in 16.605 seconds.
03:17:44,813 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:44,815 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:44,913 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:44,915 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-2 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,014 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,015 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,117 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-2 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,117 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,217 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-11 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,218 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,318 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,318 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,418 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,419 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-11 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,519 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,519 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,619 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,619 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,719 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,719 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,820 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,820 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-2 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,921 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-2 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:45,921 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,021 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,021 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,121 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,121 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,222 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,222 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-2 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,322 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,323 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-2 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,422 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,423 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-2 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,510 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,523 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,523 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-11 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,624 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,624 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,709 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,724 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,724 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-2 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,824 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,825 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-12 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,924 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:46,925 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-12 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,025 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,025 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-2 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,125 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,126 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,225 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,227 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,325 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,327 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,426 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,428 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,527 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,528 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,627 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,628 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-12 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,728 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,728 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-12 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,828 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,829 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-11 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,928 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:47,929 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,029 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,029 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,129 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,130 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,224 INFO |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.sad_rubin.HealthMonitor - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=623.2M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=69.57%, heap.memory.used/max=69.57%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=87.50%, load.system=19.53%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=4715, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,316 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-1 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,319 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,416 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,420 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,510 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,517 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,521 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,617 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,621 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,709 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,712 INFO |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.cool_rubin.HealthMonitor - [127.0.0.1]:5705 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=593.4M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=71.02%, heap.memory.used/max=71.02%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=18.13%, load.system=17.72%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-4, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=743, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,713 INFO |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.lucid_rubin.HealthMonitor - [127.0.0.1]:5704 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=593.0M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=71.04%, heap.memory.used/max=71.04%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=60.00%, load.system=0.00%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=861, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,714 INFO |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.stupefied_rubin.HealthMonitor - [127.0.0.1]:5703 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=592.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=71.06%, heap.memory.used/max=71.06%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=75.00%, load.system=71.43%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=759, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=1, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,715 INFO |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.vigorous_rubin.HealthMonitor - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=592.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=71.06%, heap.memory.used/max=71.06%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=50.00%, load.system=50.00%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-2, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=778, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,717 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,721 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,815 INFO |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.reverent_rubin.HealthMonitor - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.4G, heap.memory.free=582.3M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=71.56%, heap.memory.used/max=71.56%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=20.26%, load.system=18.23%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-5, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=601, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=2, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:48,820 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,822 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,920 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:48,922 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,021 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,023 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,121 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,123 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,134 INFO |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.competent_rubin.HealthMonitor - [127.0.0.1]:5703 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.5G, heap.memory.free=554.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=72.91%, heap.memory.used/max=72.91%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=61.54%, load.system=57.14%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=583, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=2, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
03:17:49,221 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-8 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,223 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-12 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,322 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,324 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-12 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,422 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,424 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,524 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,525 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,625 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-4 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,626 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,726 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,727 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-14 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,826 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,827 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-14 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,927 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-7 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:49,929 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-14 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,027 DEBUG |testQueryWithIndexDuringJoin[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.sad_rubin.cached.thread-16 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,029 DEBUG |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [JobCoordinationService] hz.focused_rubin.cached.thread-14 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED
03:17:50,130 INFO |testIndexCleanupOnMigration[copyBehavior: COPY_ON_READ]| - [HealthMonitor] hz.admiring_rubin.HealthMonitor - [127.0.0.1]:5704 [dev] [5.0-SNAPSHOT] processors=8, physical.memory.total=755.6G, physical.memory.free=693.5G, swap.space.total=4.0G, swap.space.free=4.0G, heap.memory.used=1.5G, heap.memory.free=488.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=76.13%, heap.memory.used/max=76.13%, minor.gc.count=7487, minor.gc.time=73256ms, major.gc.count=6, major.gc.time=2181ms, load.process=66.67%, load.system=75.00%, load.systemAverage=15.29, thread.count=1062, thread.peakCount=2648, cluster.timeDiff=-3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=625, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
[the "Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_OWNED" DEBUG line above repeats roughly every 100 ms on both members ([127.0.0.1]:5701 and [127.0.0.1]:5702) for the remainder of the capture]
...[truncated 3908441 chars]...
[lucid_rubin] [03:18:30,318] [thread=hz.lucid_rubin.partition-operation.thread-4,unit=count,metric=operation.thread.priorityPendingCount]=0
[lucid_rubin] [03:18:30,318] [metric=classloading.totalLoadedClassesCount]=157351
[lucid_rubin] [03:18:30,318] [thread=hz.lucid_rubin.generic-operation.thread-1,unit=count,metric=operation.thread.completedPartitionSpecificRunnableCount]=0
```
com.hazelcast.map.impl.query.QueryIndexMigrationTest.testQueryWithIndexDuringJoin (master rev) failed on oracle-linux

Stacktrace (line numbers and the timeout value were stripped in this capture):

org.junit.runners.model.TestTimedOutException: test timed out after milliseconds
	at java.base/java.lang.Thread.sleep(Native Method)
	at java.base/java.lang.Thread.sleep(Thread.java)
	at java.base/java.util.concurrent.TimeUnit.sleep(TimeUnit.java)
	at app//com.hazelcast.test.HazelcastTestSupport.sleepMillis(HazelcastTestSupport.java)
	at app//com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java)
	at app//com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java)
	at app//com.hazelcast.test.HazelcastTestSupport.waitAllForSafeState(HazelcastTestSupport.java)
	at app//com.hazelcast.test.HazelcastTestSupport.waitAllForSafeState(HazelcastTestSupport.java)
	at app//com.hazelcast.map.impl.query.QueryIndexMigrationTest.testQueryWithIndexDuringJoin(QueryIndexMigrationTest.java)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
	at java.base/java.lang.reflect.Method.invoke(Method.java)
	at app//org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java)
	at app//org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java)
	at app//org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
	at app//org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java)
	at app//com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java)
	at app//com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
	at java.base/java.lang.Thread.run(Thread.java)

Standard output:

Finished Running Test: testQueryWithIndexesWhileMigrating in seconds
[the rest of the standard output is a lowercased, punctuation-stripped duplicate of the DEBUG/INFO log lines shown above]
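The stack trace shows the test stuck inside `HazelcastTestSupport.assertTrueEventually` (reached via `waitAllForSafeState`) when the outer JUnit timeout fired. Below is a minimal sketch of that retry-until-deadline pattern; the method name mirrors Hazelcast's helper, but the signature, timings, and implementation here are illustrative assumptions, not Hazelcast's actual source:

```java
import java.util.function.BooleanSupplier;

public class Main {
    // Hypothetical stand-in for HazelcastTestSupport.assertTrueEventually:
    // poll `condition` every `sleepMillis` until `timeoutMillis` elapses.
    static void assertTrueEventually(BooleanSupplier condition,
                                     long timeoutMillis, long sleepMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return; // condition met, e.g. the cluster reached a safe state
            }
            Thread.sleep(sleepMillis); // matches the Thread.sleep frames in the trace
        }
        throw new AssertionError("condition not met within " + timeoutMillis + " ms");
    }

    public static void main(String[] args) throws InterruptedException {
        // A condition that becomes true after a few polls succeeds...
        final int[] polls = {0};
        assertTrueEventually(() -> ++polls[0] >= 3, 1_000, 10);
        System.out.println("reached safe state after " + polls[0] + " polls");

        // ...while one that never becomes true exhausts the deadline,
        // which is what happened in this CI run.
        try {
            assertTrueEventually(() -> false, 50, 10);
        } catch (AssertionError expected) {
            System.out.println("timed out as expected");
        }
    }
}
```

In the failing run the "cluster is safe" condition never held (partitions stayed in REPLICA_NOT_OWNED), so the `FailOnTimeoutStatement` timeout fired while the loop was sleeping, which is why `Thread.sleep` sits at the top of the trace.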
in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs 
because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration 
hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testquerywithindexduringjoin hz sad rubin cached thread not starting jobs because partition replication is not in safe state but in replica not owned debug testindexcleanuponmigration hz focused rubin cached thread not starting jobs because partition replication is not in safe state rubin | 1 |
494,585 | 14,260,915,616 | IssuesEvent | 2020-11-20 10:31:25 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.google.com - design is broken | browser-fenix engine-gecko priority-critical | <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/62164 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.google.com/#spf=1605867858484
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
The usual design isn't loading properly
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/8f43d693-a643-4e3d-928e-5b88f6f4b454.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201118041908</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/e1f7a386-8a1c-4edf-af96-8edac53aa0b0)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.google.com - design is broken - <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/62164 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.google.com/#spf=1605867858484
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
The usual design isn't loading properly
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/8f43d693-a643-4e3d-928e-5b88f6f4b454.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201118041908</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/e1f7a386-8a1c-4edf-af96-8edac53aa0b0)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | design is broken url browser version firefox mobile operating system android tested another browser yes chrome problem type design is broken description items not fully visible steps to reproduce the usual design isn t loading properly view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 0 |
225,037 | 17,212,903,309 | IssuesEvent | 2021-07-19 07:49:51 | felixfaisal/formica | https://api.github.com/repos/felixfaisal/formica | closed | Documentation using docasaurus | documentation enhancement | ## Task
Create better documentation
## Solution (optional)
Using Docusauras we can have better documentation that will allow contributors or developers to easily setup the system locally
## Type of Change
- [ ] New feature
- [ ] Feature update/enhancement
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Documentation
- [ ] Other: Please describe
## Screenshot(s) (optional)
Include any screenshots that can assist the person in reviewing the PR
| 1.0 | Documentation using docasaurus - ## Task
Create better documentation
## Solution (optional)
Using Docusauras we can have better documentation that will allow contributors or developers to easily setup the system locally
## Type of Change
- [ ] New feature
- [ ] Feature update/enhancement
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Documentation
- [ ] Other: Please describe
## Screenshot(s) (optional)
Include any screenshots that can assist the person in reviewing the PR
| non_test | documentation using docasaurus task create better documentation solution optional using docusauras we can have better documentation that will allow contributors or developers to easily setup the system locally type of change new feature feature update enhancement breaking change fix or feature that would cause existing functionality to not work as expected documentation other please describe screenshot s optional include any screenshots that can assist the person in reviewing the pr | 0 |
45,103 | 5,693,634,248 | IssuesEvent | 2017-04-15 03:42:44 | code4craft/webmagic | https://api.github.com/repos/code4craft/webmagic | closed | 无法设置POST请求发送的编码 | enhancement major toTest | 发送post请求的时候,有需要设置content-type的,没有找到在哪里设置,这里有一个关键问题,框架里,现在所有post请求的content-type都是默认的:application/x-www-form-urlencoded, chartset=iso-8859-1,而网站这里的charset是有要求的,这个需要有一个可以设置的地方
| 1.0 | 无法设置POST请求发送的编码 - 发送post请求的时候,有需要设置content-type的,没有找到在哪里设置,这里有一个关键问题,框架里,现在所有post请求的content-type都是默认的:application/x-www-form-urlencoded, chartset=iso-8859-1,而网站这里的charset是有要求的,这个需要有一个可以设置的地方
| test | 无法设置post请求发送的编码 发送post请求的时候,有需要设置content type的,没有找到在哪里设置,这里有一个关键问题,框架里,现在所有post请求的content type都是默认的:application x www form urlencoded chartset iso ,而网站这里的charset是有要求的,这个需要有一个可以设置的地方 | 1 |
269,224 | 28,960,036,313 | IssuesEvent | 2023-05-10 01:10:05 | dpteam/RK3188_TABLET | https://api.github.com/repos/dpteam/RK3188_TABLET | reopened | CVE-2022-4095 (High) detected in linuxv3.0 | Mend: dependency security vulnerability | ## CVE-2022-4095 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/verygreen/linux.git>https://github.com/verygreen/linux.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in Linux kernel before 5.19.2. This issue occurs in cmd_hdl_filter in drivers/staging/rtl8712/rtl8712_cmd.c, allowing an attacker to launch a local denial of service attack and gain escalation of privileges.
<p>Publish Date: 2023-03-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4095>CVE-2022-4095</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4095">https://www.linuxkernelcves.com/cves/CVE-2022-4095</a></p>
<p>Release Date: 2022-11-21</p>
<p>Fix Resolution: v4.9.328,v4.14.293,v4.19.258,v5.4.213,v5.10.142,v5.15.66</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-4095 (High) detected in linuxv3.0 - ## CVE-2022-4095 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/verygreen/linux.git>https://github.com/verygreen/linux.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in Linux kernel before 5.19.2. This issue occurs in cmd_hdl_filter in drivers/staging/rtl8712/rtl8712_cmd.c, allowing an attacker to launch a local denial of service attack and gain escalation of privileges.
<p>Publish Date: 2023-03-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4095>CVE-2022-4095</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4095">https://www.linuxkernelcves.com/cves/CVE-2022-4095</a></p>
<p>Release Date: 2022-11-21</p>
<p>Fix Resolution: v4.9.328,v4.14.293,v4.19.258,v5.4.213,v5.10.142,v5.15.66</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in cve high severity vulnerability vulnerable library linux kernel source tree library home page a href found in base branch master vulnerable source files drivers staging cmd c drivers staging cmd c drivers staging cmd c vulnerability details a use after free flaw was found in linux kernel before this issue occurs in cmd hdl filter in drivers staging cmd c allowing an attacker to launch a local denial of service attack and gain escalation of privileges publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
186,369 | 14,394,660,443 | IssuesEvent | 2020-12-03 01:49:26 | github-vet/rangeclosure-findings | https://api.github.com/repos/github-vet/rangeclosure-findings | closed | TheGreek9/Whats-For-Lunch: pkg/mod/golang.org/x/tools@v0.0.0-20181207195948-8634b1ecd393/go/internal/gccgoimporter/importer_test.go; 3 LoC | fresh test tiny |
Found a possible issue in [TheGreek9/Whats-For-Lunch](https://www.github.com/TheGreek9/Whats-For-Lunch) at [pkg/mod/golang.org/x/tools@v0.0.0-20181207195948-8634b1ecd393/go/internal/gccgoimporter/importer_test.go](https://github.com/TheGreek9/Whats-For-Lunch/blob/c566d8064a596f886d61e2b0ad9d2195882be45c/pkg/mod/golang.org/x/tools@v0.0.0-20181207195948-8634b1ecd393/go/internal/gccgoimporter/importer_test.go#L103-L105)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/TheGreek9/Whats-For-Lunch/blob/c566d8064a596f886d61e2b0ad9d2195882be45c/pkg/mod/golang.org/x/tools@v0.0.0-20181207195948-8634b1ecd393/go/internal/gccgoimporter/importer_test.go#L103-L105)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range importerTests {
runImporterTest(t, imp, initmap, &test)
}
```
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to test at line 104 may start a goroutine
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: c566d8064a596f886d61e2b0ad9d2195882be45c
| 1.0 | TheGreek9/Whats-For-Lunch: pkg/mod/golang.org/x/tools@v0.0.0-20181207195948-8634b1ecd393/go/internal/gccgoimporter/importer_test.go; 3 LoC -
Found a possible issue in [TheGreek9/Whats-For-Lunch](https://www.github.com/TheGreek9/Whats-For-Lunch) at [pkg/mod/golang.org/x/tools@v0.0.0-20181207195948-8634b1ecd393/go/internal/gccgoimporter/importer_test.go](https://github.com/TheGreek9/Whats-For-Lunch/blob/c566d8064a596f886d61e2b0ad9d2195882be45c/pkg/mod/golang.org/x/tools@v0.0.0-20181207195948-8634b1ecd393/go/internal/gccgoimporter/importer_test.go#L103-L105)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/TheGreek9/Whats-For-Lunch/blob/c566d8064a596f886d61e2b0ad9d2195882be45c/pkg/mod/golang.org/x/tools@v0.0.0-20181207195948-8634b1ecd393/go/internal/gccgoimporter/importer_test.go#L103-L105)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range importerTests {
runImporterTest(t, imp, initmap, &test)
}
```
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to test at line 104 may start a goroutine
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: c566d8064a596f886d61e2b0ad9d2195882be45c
| test | whats for lunch pkg mod golang org x tools go internal gccgoimporter importer test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for test range importertests runimportertest t imp initmap test below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to test at line may start a goroutine leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 1 |
303,250 | 26,196,098,527 | IssuesEvent | 2023-01-03 13:36:06 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] PartiallyCachedShardAllocationIntegTests testPartialSearchableSnapshotNotAllocatedToNodesWithoutCache failing | :Distributed/Snapshot/Restore >test-failure Team:Distributed | **Build scan:**
https://gradle-enterprise.elastic.co/s/n2fya3bzklnog/tests/:x-pack:plugin:searchable-snapshots:internalClusterTest/org.elasticsearch.xpack.searchablesnapshots.cache.shared.PartiallyCachedShardAllocationIntegTests/testPartialSearchableSnapshotNotAllocatedToNodesWithoutCache
**Reproduction line:**
```
./gradlew ':x-pack:plugin:searchable-snapshots:internalClusterTest' --tests "org.elasticsearch.xpack.searchablesnapshots.cache.shared.PartiallyCachedShardAllocationIntegTests.testPartialSearchableSnapshotNotAllocatedToNodesWithoutCache" -Dtests.seed=93FC1C9EB0FD4E2E -Dtests.locale=sr-BA -Dtests.timezone=America/Goose_Bay -Druntime.java=17
```
**Applicable branches:**
8.6
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.searchablesnapshots.cache.shared.PartiallyCachedShardAllocationIntegTests&tests.test=testPartialSearchableSnapshotNotAllocatedToNodesWithoutCache
**Failure excerpt:**
```
java.lang.AssertionError: test leaves persistent cluster metadata behind
Expected: an empty collection
but: <[cluster.routing.rebalance.enable]>
at __randomizedtesting.SeedInfo.seed([93FC1C9EB0FD4E2E:DC0AAA708D408FF]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.elasticsearch.test.ESIntegTestCase.afterInternal(ESIntegTestCase.java:569)
at org.elasticsearch.test.ESIntegTestCase.cleanUpCluster(ESIntegTestCase.java:2237)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:1004)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
```
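The failing assertion above reports that the test left `cluster.routing.rebalance.enable` behind as a persistent cluster setting. As a reference point (not part of this report), Elasticsearch removes a persistent setting when it is explicitly set to `null` via the cluster settings API, e.g. a `PUT _cluster/settings` request with the body:

```json
{
  "persistent": {
    "cluster.routing.rebalance.enable": null
  }
}
```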
288,766 | 31,924,980,359 | IssuesEvent | 2023-09-19 00:34:13 | NileshGule/cloud-native-ninja | https://api.github.com/repos/NileshGule/cloud-native-ninja | opened | jest-dom-5.16.5.tgz: 1 vulnerabilities (highest severity is: 5.0) | Mend: dependency security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jest-dom-5.16.5.tgz</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/NileshGule/cloud-native-ninja/commit/c04fe7000230b4a479ad8e4e4d31201f2e3f6536">c04fe7000230b4a479ad8e4e4d31201f2e3f6536</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (jest-dom version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2023-26364](https://www.mend.io/vulnerability-database/CVE-2023-26364) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.0 | css-tools-4.2.0.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p><p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2023-26364</summary>
### Vulnerable Library - <b>css-tools-4.2.0.tgz</b></p>
<p></p>
<p>Library home page: <a href="https://registry.npmjs.org/@adobe/css-tools/-/css-tools-4.2.0.tgz">https://registry.npmjs.org/@adobe/css-tools/-/css-tools-4.2.0.tgz</a></p>
<p>
Dependency Hierarchy:
- jest-dom-5.16.5.tgz (Root Library)
- :x: **css-tools-4.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/NileshGule/cloud-native-ninja/commit/c04fe7000230b4a479ad8e4e4d31201f2e3f6536">c04fe7000230b4a479ad8e4e4d31201f2e3f6536</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
@adobe/css-tools version 4.3.0 and earlier are affected by an Improper Input Validation vulnerability that could result in a denial of service while attempting to parse CSS.
<p>Publish Date: 2023-02-23
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2023-26364">CVE-2023-26364</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-hpx4-r86g-5jrg">https://github.com/advisories/GHSA-hpx4-r86g-5jrg</a></p>
<p>Release Date: 2023-02-23</p>
<p>Fix Resolution: @adobe/css-tools - 4.3.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
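The table notes that no fixed version of the direct dependency exists. A common workaround (assuming npm ≥ 8.3; this is not part of the Mend report) is to force the transitive package with an `overrides` block in `package.json` (Yarn users would use `resolutions` instead):

```json
{
  "overrides": {
    "@adobe/css-tools": ">=4.3.1"
  }
}
```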
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/NileshGule/cloud-native-ninja/commit/c04fe7000230b4a479ad8e4e4d31201f2e3f6536">c04fe7000230b4a479ad8e4e4d31201f2e3f6536</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (jest-dom version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2023-26364](https://www.mend.io/vulnerability-database/CVE-2023-26364) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.0 | css-tools-4.2.0.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p><p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2023-26364</summary>
### Vulnerable Library - <b>css-tools-4.2.0.tgz</b></p>
<p></p>
<p>Library home page: <a href="https://registry.npmjs.org/@adobe/css-tools/-/css-tools-4.2.0.tgz">https://registry.npmjs.org/@adobe/css-tools/-/css-tools-4.2.0.tgz</a></p>
<p>
Dependency Hierarchy:
- jest-dom-5.16.5.tgz (Root Library)
- :x: **css-tools-4.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/NileshGule/cloud-native-ninja/commit/c04fe7000230b4a479ad8e4e4d31201f2e3f6536">c04fe7000230b4a479ad8e4e4d31201f2e3f6536</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
@adobe/css-tools version 4.3.0 and earlier are affected by an Improper Input Validation vulnerability that could result in a denial of service while attempting to parse CSS.
<p>Publish Date: 2023-02-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-26364>CVE-2023-26364</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-hpx4-r86g-5jrg">https://github.com/advisories/GHSA-hpx4-r86g-5jrg</a></p>
<p>Release Date: 2023-02-23</p>
<p>Fix Resolution: @adobe/css-tools - 4.3.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | non_test | jest dom tgz vulnerabilities highest severity is vulnerable library jest dom tgz found in head commit a href vulnerabilities cve severity cvss dependency type fixed in jest dom version remediation possible medium css tools tgz transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed in some cases remediation pr cannot be created automatically for a vulnerability despite the availability of remediation details cve vulnerable library css tools tgz library home page a href dependency hierarchy jest dom tgz root library x css tools tgz vulnerable library found in head commit a href found in base branch main vulnerability details adobe css tools version and earlier are affected by an improper input validation vulnerability that could result in a denial of service while attempting to parse css publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope changed impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution adobe css tools step up your open source security game with mend | 0 |
16,364 | 2,613,994,303 | IssuesEvent | 2015-02-28 02:25:28 | RMRobotics/FTC_5421_2014-2015 | https://api.github.com/repos/RMRobotics/FTC_5421_2014-2015 | closed | Rewrite Drive.h encoder fns to use the PolarMecDrive function calculations | high priority | The way we do the PolarMecDrive calculations makes it so that if we split off the calculations into a separate function, we can then feed that function either a maxSpeed of 100 or a maxSpeed of encoder value and it can spit out encoder values or speed values.
The calculation returns ratios to maxSpeed, so sending it an encoder value for maxSpeed will return the correct encoder ratio. This will make our code more consistent, and will also enable us to do arbitrary encoder fns for any speed, rotation, length.
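The split described above can be sketched as a single routine that returns wheel ratios scaled to whatever maxSpeed is passed in. This is an illustrative reconstruction assuming the usual polar mecanum wheel equations — the function name and equations are not the actual Drive.h code:

```go
package main

import (
	"fmt"
	"math"
)

// polarMec computes the four mecanum wheel ratios for a polar drive command
// and scales them so the largest magnitude equals maxOut: pass maxOut=100 to
// get speed percentages, or pass an encoder target to get encoder counts.
func polarMec(angleRad, magnitude, rotation, maxOut float64) [4]float64 {
	fl := magnitude*math.Sin(angleRad+math.Pi/4) + rotation
	fr := magnitude*math.Cos(angleRad+math.Pi/4) - rotation
	bl := magnitude*math.Cos(angleRad+math.Pi/4) + rotation
	br := magnitude*math.Sin(angleRad+math.Pi/4) - rotation
	// normalize so the largest magnitude maps to maxOut
	m := math.Max(math.Max(math.Abs(fl), math.Abs(fr)), math.Max(math.Abs(bl), math.Abs(br)))
	if m == 0 {
		return [4]float64{}
	}
	s := maxOut / m
	return [4]float64{fl * s, fr * s, bl * s, br * s}
}

func main() {
	fmt.Println(polarMec(0, 0, 0.5, 100))  // [100 -100 100 -100]  (speed ratios)
	fmt.Println(polarMec(0, 0, 0.5, 1440)) // [1440 -1440 1440 -1440]  (encoder counts)
}
```

The same rotate-in-place command yields consistent ratios whether scaled to a speed of 100 or to an encoder target of 1440.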
166,683 | 6,309,948,066 | IssuesEvent | 2017-07-23 04:39:58 | USGS-Astrogeology/PySAT_Point_Spectra_GUI | https://api.github.com/repos/USGS-Astrogeology/PySAT_Point_Spectra_GUI | opened | Add 'CV' and 'IC' for regression UIs | enhancement Priority: High | The regression UIs do not have the CV portion added,
regression.py has had the CV part commented out; this will be fixed soon.
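For readers unfamiliar with the CV option referenced here, cross-validation partitions samples into folds and trains on all but one fold at a time. A minimal k-fold index splitter, purely as an illustration of the concept (the project's actual implementation is the commented-out part of regression.py):

```go
package main

import "fmt"

// kfold returns k (train, test) index splits of n samples, assigning
// samples to folds round-robin style.
func kfold(n, k int) [][2][]int {
	folds := make([][]int, k)
	for i := 0; i < n; i++ {
		folds[i%k] = append(folds[i%k], i)
	}
	var out [][2][]int
	for i := 0; i < k; i++ {
		var train []int
		for j := 0; j < k; j++ {
			if j != i {
				train = append(train, folds[j]...)
			}
		}
		out = append(out, [2][]int{train, folds[i]})
	}
	return out
}

func main() {
	for _, s := range kfold(6, 3) {
		fmt.Println(s[0], s[1])
	}
	// [1 4 2 5] [0 3]
	// [0 3 2 5] [1 4]
	// [0 3 1 4] [2 5]
}
```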
107,398 | 9,211,811,693 | IssuesEvent | 2019-03-09 18:30:00 | GatorEducator/gatorgrouper | https://api.github.com/repos/GatorEducator/gatorgrouper | closed | Perform Mutation Testing to Assess the Adequacy of the Test Suite | technique testing tools | Even though GatorGrouper has a high-coverage test suite, it would be a good idea for us to assess the quality of the tests through the use of a mutation testing tool. There are several mutation testing tools for Python, such as MutPy and Cosmic-Ray. However, we would need to investigate the types of mutation operators employed by these tools and learn how to introduce them into our build system.
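For background on the mutation operators mentioned above: a mutation tool makes one small syntactic change (a "mutant") and checks whether the test suite notices. A hand-made illustration (in Go for brevity; MutPy and Cosmic-Ray operate on Python) of a relational-operator mutant and the boundary test needed to kill it:

```go
package main

import "fmt"

// isAdult is the original implementation under test.
func isAdult(age int) bool { return age >= 18 }

// isAdultMutant is what a relational-operator mutation (>= becomes >) produces.
func isAdultMutant(age int) bool { return age > 18 }

func main() {
	// A weak test probing only age 30 cannot tell the two apart: the mutant survives.
	fmt.Println(isAdult(30), isAdultMutant(30)) // true true
	// A boundary test at age 18 distinguishes them: the mutant is killed.
	fmt.Println(isAdult(18), isAdultMutant(18)) // true false
}
```

A surviving mutant like this one signals a missing boundary case even when line coverage is already 100%.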
300,174 | 22,644,408,251 | IssuesEvent | 2022-07-01 07:14:46 | AlexKollar/Cryptex | https://api.github.com/repos/AlexKollar/Cryptex | opened | Cryptex Github.io | documentation enhancement | I figure it might be time to make an official github pages for Cryptex.
Despite the repo speaking for itself, having a nice fancy page for it would be cool.
Anyone want to tag in on this, feel free. 👍🏼
21,138 | 3,686,285,121 | IssuesEvent | 2016-02-25 00:26:50 | cgstudiomap/cgstudiomap | https://api.github.com/repos/cgstudiomap/cgstudiomap | opened | As a studio, I would like to be able to find an option to unsubscribe from the mailing. | 0 - Backlog design Development | When studios unsubscribe from Mailchimp, I will click opt-out in the Odoo instance, which avoids sending them mass emails again in the future:

I don't know the laws of every country, but in Canada since 2014, if a person unsubscribes, they can sue us for not honouring it and sending another eblast. It is a mandatory option.
<!---
@huboard:{"order":2.2750898387435825e-15,"milestone_order":3.848781182532762e-49}
-->
326,714 | 28,014,474,873 | IssuesEvent | 2023-03-27 21:17:41 | jar285/mywebclass-simulation | https://api.github.com/repos/jar285/mywebclass-simulation | closed | Integrate automated testing into CI/CD pipeline for continuous monitoring | Testing | As a developer, I want to ensure that my code changes do not negatively impact the performance of the application, so that I can deploy with confidence. To achieve this, I need to integrate automated testing into the CI/CD pipeline to continuously monitor performance metrics. This will allow me to detect any performance regressions early in the development cycle and address them before they become larger issues. The automated tests should measure the load time of the application, and fail the build if the load time exceeds a certain threshold.
343,528 | 30,670,838,688 | IssuesEvent | 2023-07-25 22:19:48 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Flaky cypress test: `composer.spec > Rich text editor > autocomplete behaviour tests` | A-Developer-Experience Z-Labs Z-Flaky-Test A-Rich-Text-Editor | ### Your use case
```
Composer
Rich text editor
Mentions
Plain text mode
autocomplete behaviour tests:
CypressError: Timed out retrying after 10050ms: `cy.click()` failed because the page updated while this command was executing. Cypress tried to locate elements based on this query:
> <div.mx_AccessibleButton.mx_RoomTile>
```
### Have you considered any alternatives?
_No response_
### Additional context
_No response_
365,557 | 10,789,235,290 | IssuesEvent | 2019-11-05 11:27:05 | ipfs/ipfs-cluster | https://api.github.com/repos/ipfs/ipfs-cluster | closed | Pin expiration | difficulty:hard help wanted priority:low ready |
#### Basic information
* [x] Version information (mark as appropriate):
* [x] Master
* [ ] Release candidate for next version
* [ ] Latest stable version
* [ ] An older version I should not be using
* [x] Type (mark as appropriate):
* [ ] Bug
* [x] Feature request
* [ ] Enhancement
#### Description
<!--
Include a description of the problem or the feature.
When reporting a bug, please try to include:
* What you were doing when you experienced the bug.
* Any relevant log messages (and the peers they belong to if you have logs for several peers).
* When possible, steps to reproduce the bug.
-->
IPFS Cluster could automatically expire Pins given an expiry date associated with the Pin. The pinset could be watched regularly and unpins triggered for expired items.
This would probably be a feature of the main component, as pin parameters are handled from there. It would imply storing expiry dates associated to Pins in the state.
Note, unpinning from cluster would just mean that content in the IPFS nodes can be garbage collected, but not that the content will stop being available from the network (if it has been fetched from somewhere else).
185,076 | 14,292,764,518 | IssuesEvent | 2020-11-24 01:55:31 | github-vet/rangeclosure-findings | https://api.github.com/repos/github-vet/rangeclosure-findings | closed | benfab/clair-demo: clair/clair/vendor/golang.org/x/net/icmp/multipart_test.go; 42 LoC | fresh small test |
Found a possible issue in [benfab/clair-demo](https://www.github.com/benfab/clair-demo) at [clair/clair/vendor/golang.org/x/net/icmp/multipart_test.go](https://github.com/benfab/clair-demo/blob/ddcb5f6ed3272c8f301a36f329e3d2563852542b/clair/clair/vendor/golang.org/x/net/icmp/multipart_test.go#L132-L173)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/benfab/clair-demo/blob/ddcb5f6ed3272c8f301a36f329e3d2563852542b/clair/clair/vendor/golang.org/x/net/icmp/multipart_test.go#L132-L173)
<details>
<summary>Click here to show the 42 line(s) of Go which triggered the analyzer.</summary>
```go
for i, tt := range marshalAndParseMultipartMessageForIPv4Tests {
b, err := tt.Marshal(nil)
if err != nil {
t.Fatal(err)
}
if b[5] != 32 {
t.Errorf("#%v: got %v; want 32", i, b[5])
}
m, err := icmp.ParseMessage(iana.ProtocolICMP, b)
if err != nil {
t.Fatal(err)
}
if m.Type != tt.Type || m.Code != tt.Code {
t.Errorf("#%v: got %v; want %v", i, m, &tt)
}
switch m.Type {
case ipv4.ICMPTypeDestinationUnreachable:
got, want := m.Body.(*icmp.DstUnreach), tt.Body.(*icmp.DstUnreach)
if !reflect.DeepEqual(got.Extensions, want.Extensions) {
t.Error(dumpExtensions(i, got.Extensions, want.Extensions))
}
if len(got.Data) != 128 {
t.Errorf("#%v: got %v; want 128", i, len(got.Data))
}
case ipv4.ICMPTypeTimeExceeded:
got, want := m.Body.(*icmp.TimeExceeded), tt.Body.(*icmp.TimeExceeded)
if !reflect.DeepEqual(got.Extensions, want.Extensions) {
t.Error(dumpExtensions(i, got.Extensions, want.Extensions))
}
if len(got.Data) != 128 {
t.Errorf("#%v: got %v; want 128", i, len(got.Data))
}
case ipv4.ICMPTypeParameterProblem:
got, want := m.Body.(*icmp.ParamProb), tt.Body.(*icmp.ParamProb)
if !reflect.DeepEqual(got.Extensions, want.Extensions) {
t.Error(dumpExtensions(i, got.Extensions, want.Extensions))
}
if len(got.Data) != 128 {
t.Errorf("#%v: got %v; want 128", i, len(got.Data))
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: ddcb5f6ed3272c8f301a36f329e3d2563852542b
200,852 | 15,160,619,166 | IssuesEvent | 2021-02-12 07:26:21 | tutao/tutanota | https://api.github.com/repos/tutao/tutanota | reopened | Canceling of message deletion leads to disappearing message | bug tested | - [x] This is not a feature request (existing functionality does not work, **not** missing functionality).
I will request features on [forum](https://www.reddit.com/r/tutanota/) or via support.
- [x] I've searched and did not find a similar issue.
**Bug in mobile app**
**Describe the bug**
Message disappears if I cancel my attempt to delete a message.
**To Reproduce**
- I swipe left on a message
- A popup appears, which I cancel
- The message is gone
**Expected behavior**
I would expect that the message is still there after clicking on cancel
**Screenshots**

**Smartphone (please complete the following information):**
- Device: Honor 8
- OS: Android 8.0.0
- Version: Tutanota 3.73.1
**Additional context**
Add any other context about the problem here.
821 | 2,550,200,811 | IssuesEvent | 2015-02-01 08:03:00 | BVLC/caffe | https://api.github.com/repos/BVLC/caffe | opened | Explain Convolution / Deconvolution | documentation | Explain convolution by documentation and example to illustrate
- conv. net style filtering, just to be clear
- multiple tops + bottoms
- separable filtering
- groups
- deconvolution
- ?
and even cover the computational strategies behind the different engines. This relates to #1776 since interpolation kernels and separable edge filters are obviously filters.
104,982 | 22,793,401,179 | IssuesEvent | 2022-07-10 10:52:38 | Team-Discipline/Xlack-Backend | https://api.github.com/repos/Team-Discipline/Xlack-Backend | closed | [b] Authorization Feature | enhancement code review | - [x] CRUD Authorization feature.
- [x] Check authorization when client wants crucial works.
198,318 | 6,972,236,942 | IssuesEvent | 2017-12-11 16:25:04 | Scifabric/pybossa | https://api.github.com/repos/Scifabric/pybossa | closed | Delete tasks if you are an admin and there are results | priority.medium | Right now PYBOSSA prevents anyone to delete a task if it has a result. This is to prevent free-riders (a project owner creates a project, allows anyone to participate, gets all the task runs, and before sharing anything it deletes all the data).
However, there are scenarios where an admin should be able to delete them. For these cases, [this code](https://github.com/Scifabric/pybossa/blob/master/pybossa/auth/task.py#L44) should be fixed.
238 | 2,525,517,027 | IssuesEvent | 2015-01-21 01:53:45 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | TEST_BENCH does nothing | A-build A-testsuite E-easy | `make tips` says that `TEST_BENCH=1` will run benchmarks but it is not true. Are we ever running benchmarks?
185,817 | 15,033,771,411 | IssuesEvent | 2021-02-02 11:57:52 | systemd/systemd | https://api.github.com/repos/systemd/systemd | closed | docs: logind methods do not enforce inhibitors of type "block" for sufficiently privileged clients | documentation login | ### Submission type
- Bug report
<!-- **NOTE:** Do not submit anything other than bug reports or RFEs via the issue tracker! -->
### systemd version the issue has been seen with
systemd-233-6.fc26.x86_64
<!-- **NOTE:** Do not submit bug reports about anything but the two most recently released systemd versions upstream! -->
<!-- For older version please use distribution trackers (see https://github.com/systemd/systemd/blob/master/.github/CONTRIBUTING.md#filing-issues). -->
### Used distribution
Fedora 26
### In case of bug report: Expected behaviour you didn't see
According to the [documentation](https://www.freedesktop.org/wiki/Software/systemd/logind/), calling `org.freedesktop.login1.Manager.Suspend()` - which is how `systemctl suspend` works for unprivileged users - should "enforce" inhibition locks.
> PowerOff(), Reboot(), Suspend(), Hibernate(), HybridSleep() results in the system being powered off, rebooted, suspend, hibernated or hibernated+suspended. The only argument is the PolicyKit interactivity boolean (see above). The main purpose of these calls is that they enforce PolicyKit policy and hence allow powering off/rebooting/suspending/hibernating even by unprivileged users. _They also enforce inhibition locks._
The bug is that this is not correct. logind does not enforce inhibition locks of type "block" when this method is called. Instead, the implementation of `systemctl` (`logind_check_inhibitors()`) shows that inhibition locks of type "block" are enforced by the client.
### In case of bug report: Unexpected behaviour you saw
Sleep proceeds without being blocked.
### In case of bug report: Steps to reproduce the problem
```
$ systemd-inhibit --what=sleep sleep 60 &
$ systemctl suspend -i
```
232,001 | 17,767,931,116 | IssuesEvent | 2021-08-30 09:56:52 | vlang/vab | https://api.github.com/repos/vlang/vab | closed | Specify more details in README | documentation question | Exciting project!
Questions it'd be great to answer some place, either here or in the README:
1. Can we do network calls from V in an Android app generated with vab? Last I heard there are extra hoops one must jump through to use the network on Android from a language other than Java or Kotlin, though that was 3 years ago.
2. Are there any apps in the Play Store right now that were written in V? If so, consider linking to them!
3. What limitations should we expect right now?
4. Has anyone built the UI and glue code in Kotlin, and built the rest of the app in V?
Thanks!
348,549 | 31,626,455,795 | IssuesEvent | 2023-09-06 05:55:27 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | closed | Validate Syscollector deltas PKs handling | level/subtask type/test | | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
|4.5.2| RC2| https://github.com/wazuh/wazuh/pull/18742|
## Description
This issue aims to execute some manual smoke and backward compatibility tests related to https://github.com/wazuh/wazuh/issues/18714 changes
**NOTE:**
Although this test should be done with a pre-release package, v4.5.2-rc2 contains this change as its main change, so we agreed to test it with freshly built packages:
https://ci.wazuh.info/job/Packages_builder/164048/
## Proposed checks
Check current manager (v4.5.2-rc1) behavior with older Wazuh Agents on different OS families
- Wazuh agent versions (last patch version for last minors that have relevant changes):
- v4.1.5: legacy syscollector sync
- v4.2.7: syscollector rework
- v4.4.5: syscollector deltas
- OS (one per package manager supported):
- [x] Windows: last wide-used version
- [x] Linux
- [x] RPM Legacy: Amazon Linux 2
- [x] RPM new: / OpenSuse Tumbleweed
- [x] DEB: Ubuntu 22.04 LTS
- [x] macOS (Ventura) Brew/Pkg
From version to version, upgrading is the method to be used
## Scope and history
- Whole sync information sent: available before 4.2. All inventory info was sent in each scan
- Rsync: sync algorithm based on binary search. Available in syscollector since 4.2 (only available sync mechanism) up to nowadays (restricted only on the first scan and combined with dbsync/deltas from the second scan)
- Dbsync and deltas: sync algorithm based on diff since the last scan. Available in syscollector since 4.4 up to nowadays (restricted only in second and further scans)
## Preconditions
- Set up a v4.5.2-rc1 Wazuh Manager. OVA or AMI is the simplest way
- Run `echo "wazuh_db.debug=2" >> local_internal_options.conf` and restart
- For each OS and Agent version
- Install the Agent version on the selected OS
- Set connection information according to Manager information
- Set the syscollector frequency to allow fluent manual testing and debugging: 4/5 minutes should be OK
- Disable SCA, Syscheck, Rootcheck and other modules to simplify log handling
- Start the agent
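The frequency and module preconditions above map to the agent's `ossec.conf`; a sketch of a syscollector wodle block with a short scan interval (the values are illustrative and should be adjusted to the agent version under test):

```xml
<!-- Illustrative syscollector configuration for a short, debug-friendly scan interval -->
<wodle name="syscollector">
  <disabled>no</disabled>
  <interval>5m</interval>
  <scan_on_start>yes</scan_on_start>
  <packages>yes</packages>
  <processes>yes</processes>
  <ports all="no">yes</ports>
</wodle>
```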
## Expected results
- Manager inventory for the agent is ALWAYS consistent
- Manager's daemons do not show any ERROR/WARNING/CRITICAL messages
- Manager's daemons, especially analysisd, modulesd and wazuh-db, still run during and after syscollector information handling
Tool bump_info.sh
```sh
#!/bin/bash
# Usage: AGENTID=<agent-id> ./bump_info.sh  (requires jq, sqlite3, and manager API credentials wazuh:wazuh)
TOKEN=$(curl -s -u wazuh:wazuh -k -X POST "https://localhost:55000/security/user/authenticate?raw=true")
echo "Last scan"
curl -s -k -X GET "https://localhost:55000/syscollector/$AGENTID/os?select=scan.time&pretty=true" -H "Authorization: Bearer $TOKEN" | jq -r '.data.affected_items[0].scan.time'
echo "Comparing Packages"
curl -s -k -X GET "https://localhost:55000/syscollector/$AGENTID/packages?select=name" -H "Authorization: Bearer $TOKEN" | jq -r '.data.total_affected_items'
sqlite3 /var/ossec/queue/db/$AGENTID.db "select count(*) from sys_programs;"
echo "Comparing Processes"
curl -s -k -X GET "https://localhost:55000/syscollector/$AGENTID/processes?select=name" -H "Authorization: Bearer $TOKEN" | jq -r '.data.total_affected_items'
sqlite3 /var/ossec/queue/db/$AGENTID.db "select count(*) from sys_processes;"
echo "Comparing Ports"
curl -s -k -X GET "https://localhost:55000/syscollector/$AGENTID/ports?select=local.port" -H "Authorization: Bearer $TOKEN" | jq -r '.data.total_affected_items'
sqlite3 /var/ossec/queue/db/$AGENTID.db "select count(*) from sys_ports;"
grep -E "ERROR|WARNING|CRITICAL" /var/ossec/logs/ossec.log
```
For 4.1.5 agents:
- Any scan sync sending all information at once
For 4.2.7 agents:
- Any scan sync using Rsync
For 4.4.5 agents:
- First scan sync using Rsync
- Second scan and further ones
- Sync using Dbsync/deltas
 - Sync using Rsync -> avoiding sync because Dbsync/deltas populated OK
102,598 | 8,850,244,784 | IssuesEvent | 2019-01-08 12:42:49 | FreeRDP/FreeRDP | https://api.github.com/repos/FreeRDP/FreeRDP | closed | Memory leaks in TestFreeRDPCodecRemoteFX | fixed-waiting-test | Compiling with the following scripts, I get memory leaks in TestFreeRDPCodecRemoteFX.
Summary: compiling with various -fsanitize options with gcc-8 on ubuntu-18.04.
In details:
I run this script to install gcc-8 with its dependencies in `/usr/local/gcc` on ubuntu-18.04 and ubuntu-14.04 (also tested on debian 8.11 jessie).
[get-and-compile-gcc.txt](https://github.com/FreeRDP/FreeRDP/files/2703001/get-and-compile-gcc.txt)
Then I use this script to compile and test FreeRDP (e.g. master 5b24dc1aca924bf0c7bdb959233f887f35b84c9b):
[compile-freerdp-memory-leaks.txt](https://github.com/FreeRDP/FreeRDP/files/2703009/compile-freerdp-memory-leaks.txt)
and get:
```
Start 171: TestFreeRDPCodecRemoteFX
171/186 Test #171: TestFreeRDPCodecRemoteFX .................***Failed 0.21 sec
=================================================================
==14224==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 16384 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbad38 in __interceptor_calloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xded38)
#1 0x55997288e0dc in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:783
#2 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#3 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Direct leak of 192 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbad38 in __interceptor_calloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xded38)
#1 0x7fe02f9cf949 in rfx_context_new /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:210
#2 0x55997288e0b2 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:779
#3 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#4 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Direct leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbab50 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)
#1 0x7fe02f9e199e in allocateRegion /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/region.c:193
#2 0x7fe02f9e410d in region16_union_rect /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/region.c:507
#3 0x7fe02f9daa28 in rfx_process_message /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:1191
#4 0x55997288e1b1 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:793
#5 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#6 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 24712 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbab50 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)
#1 0x7fe02f560d1c in _aligned_offset_malloc /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/crt/alignment.c:104
#2 0x7fe02f560b7a in _aligned_malloc /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/crt/alignment.c:61
#3 0x7fe02f499510 in BufferPool_Take /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/utils/collections/BufferPool.c:183
#4 0x7fe02f9b444c in rfx_decode_rgb /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx_decode.c:79
#5 0x7fe02f9d581d in rfx_process_message_tile_work_callback /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:747
#6 0x7fe02f51a082 in thread_pool_work_func /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/pool/pool.c:91
#7 0x7fe02f51f124 in thread_launcher /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/thread/thread.c:334
#8 0x7fe02e9776da in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x76da)
Indirect leak of 24712 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbab50 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)
#1 0x7fe02f560d1c in _aligned_offset_malloc /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/crt/alignment.c:104
#2 0x7fe02f560b7a in _aligned_malloc /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/crt/alignment.c:61
#3 0x7fe02f499510 in BufferPool_Take /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/utils/collections/BufferPool.c:183
#4 0x7fe02f9b3f80 in rfx_decode_component /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx_decode.c:45
#5 0x7fe02f9b459d in rfx_decode_rgb /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx_decode.c:83
#6 0x7fe02f9d581d in rfx_process_message_tile_work_callback /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:747
#7 0x7fe02f51a082 in thread_pool_work_func /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/pool/pool.c:91
#8 0x7fe02f51f124 in thread_launcher /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/thread/thread.c:334
#9 0x7fe02e9776da in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x76da)
Indirect leak of 16424 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbab50 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)
#1 0x7fe02f560d1c in _aligned_offset_malloc /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/crt/alignment.c:104
#2 0x7fe02f560b7a in _aligned_malloc /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/crt/alignment.c:61
#3 0x7fe02f9cf5dc in rfx_decoder_tile_new /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:165
#4 0x7fe02f49b77b in ObjectPool_Take /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/utils/collections/ObjectPool.c:54
#5 0x7fe02f9d74e5 in rfx_process_message_tileset /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:887
#6 0x7fe02f9d9dae in rfx_process_message /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:1126
#7 0x55997288e1b1 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:793
#8 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#9 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 256 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbad38 in __interceptor_calloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xded38)
#1 0x7fe02f49bf3c in ObjectPool_New /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/utils/collections/ObjectPool.c:132
#2 0x7fe02f9cfafb in rfx_context_new /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:227
#3 0x55997288e0b2 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:779
#4 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#5 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 256 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbad38 in __interceptor_calloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xded38)
#1 0x7fe02f49b087 in BufferPool_New /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/utils/collections/BufferPool.c:455
#2 0x7fe02f9cfd70 in rfx_context_new /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:259
#3 0x55997288e0b2 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:779
#4 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#5 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 128 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbad38 in __interceptor_calloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xded38)
#1 0x7fe02f9cfa2a in rfx_context_new /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:217
#2 0x55997288e0b2 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:779
#3 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#4 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 104 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbab50 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)
#1 0x7fe02f49ae59 in BufferPool_New /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/utils/collections/BufferPool.c:434
#2 0x7fe02f9cfd70 in rfx_context_new /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:259
#3 0x55997288e0b2 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:779
#4 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#5 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 104 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbad38 in __interceptor_calloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xded38)
#1 0x7fe02f49be7d in ObjectPool_New /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/utils/collections/ObjectPool.c:126
#2 0x7fe02f9cfafb in rfx_context_new /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:227
#3 0x55997288e0b2 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:779
#4 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#5 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 80 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbad38 in __interceptor_calloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xded38)
#1 0x7fe02f9cf5b8 in rfx_decoder_tile_new /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:162
#2 0x7fe02f49b77b in ObjectPool_Take /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/utils/collections/ObjectPool.c:54
#3 0x7fe02f9d74e5 in rfx_process_message_tileset /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:887
#4 0x7fe02f9d9dae in rfx_process_message /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:1126
#5 0x55997288e1b1 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:793
#6 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#7 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 40 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbaf40 in realloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdef40)
#1 0x7fe02f9d6135 in rfx_process_message_tileset /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:809
#2 0x7fe02f9d9dae in rfx_process_message /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:1126
#3 0x55997288e1b1 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:793
#4 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#5 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbab50 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)
#1 0x7fe02f4269d0 in InitializeCriticalSectionEx /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/synch/critical.c:76
#2 0x7fe02f426b2c in InitializeCriticalSectionAndSpinCount /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/synch/critical.c:102
#3 0x7fe02f49c00d in ObjectPool_New /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/utils/collections/ObjectPool.c:141
#4 0x7fe02f9cfafb in rfx_context_new /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:227
#5 0x55997288e0b2 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:779
#6 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#7 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbab50 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)
#1 0x7fe02f4269d0 in InitializeCriticalSectionEx /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/synch/critical.c:76
#2 0x7fe02f426b2c in InitializeCriticalSectionAndSpinCount /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/synch/critical.c:102
#3 0x7fe02f49af8c in BufferPool_New /home/pjb/src/span/sources/FreeRDP/winpr/libwinpr/utils/collections/BufferPool.c:447
#4 0x7fe02f9cfd70 in rfx_context_new /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:259
#5 0x55997288e0b2 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:779
#6 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#7 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 8 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbaf40 in realloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdef40)
#1 0x7fe02f9d7281 in rfx_process_message_tileset /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:855
#2 0x7fe02f9d9dae in rfx_process_message /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:1126
#3 0x55997288e1b1 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:793
#4 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#5 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
Indirect leak of 8 byte(s) in 1 object(s) allocated from:
#0 0x7fe02fcbaf40 in realloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdef40)
#1 0x7fe02f9d4b60 in rfx_process_message_region /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:691
#2 0x7fe02f9d9d7e in rfx_process_message /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/rfx.c:1122
#3 0x55997288e1b1 in TestFreeRDPCodecRemoteFX /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodecRemoteFX.c:793
#4 0x55997287003b in main /home/pjb/src/span/sources/FreeRDP/libfreerdp/codec/test/TestFreeRDPCodec.c:174
#5 0x7fe02efbcb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
SUMMARY: AddressSanitizer: 83496 byte(s) leaked in 17 allocation(s).
```
allocated from in interceptor malloc usr lib linux gnu libasan so in initializecriticalsectionex home pjb src span sources freerdp winpr libwinpr synch critical c in initializecriticalsectionandspincount home pjb src span sources freerdp winpr libwinpr synch critical c in objectpool new home pjb src span sources freerdp winpr libwinpr utils collections objectpool c in rfx context new home pjb src span sources freerdp libfreerdp codec rfx c in testfreerdpcodecremotefx home pjb src span sources freerdp libfreerdp codec test testfreerdpcodecremotefx c in main home pjb src span sources freerdp libfreerdp codec test testfreerdpcodec c in libc start main lib linux gnu libc so indirect leak of byte s in object s allocated from in interceptor malloc usr lib linux gnu libasan so in initializecriticalsectionex home pjb src span sources freerdp winpr libwinpr synch critical c in initializecriticalsectionandspincount home pjb src span sources freerdp winpr libwinpr synch critical c in bufferpool new home pjb src span sources freerdp winpr libwinpr utils collections bufferpool c in rfx context new home pjb src span sources freerdp libfreerdp codec rfx c in testfreerdpcodecremotefx home pjb src span sources freerdp libfreerdp codec test testfreerdpcodecremotefx c in main home pjb src span sources freerdp libfreerdp codec test testfreerdpcodec c in libc start main lib linux gnu libc so indirect leak of byte s in object s allocated from in realloc usr lib linux gnu libasan so in rfx process message tileset home pjb src span sources freerdp libfreerdp codec rfx c in rfx process message home pjb src span sources freerdp libfreerdp codec rfx c in testfreerdpcodecremotefx home pjb src span sources freerdp libfreerdp codec test testfreerdpcodecremotefx c in main home pjb src span sources freerdp libfreerdp codec test testfreerdpcodec c in libc start main lib linux gnu libc so indirect leak of byte s in object s allocated from in realloc usr lib linux gnu libasan so in rfx process 
message region home pjb src span sources freerdp libfreerdp codec rfx c in rfx process message home pjb src span sources freerdp libfreerdp codec rfx c in testfreerdpcodecremotefx home pjb src span sources freerdp libfreerdp codec test testfreerdpcodecremotefx c in main home pjb src span sources freerdp libfreerdp codec test testfreerdpcodec c in libc start main lib linux gnu libc so summary addresssanitizer byte s leaked in allocation s | 1 |
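The record above ends with a flattened LeakSanitizer report from a FreeRDP RemoteFX codec test; the dataset's preprocessing stripped the leak sizes and punctuation. As an illustrative sketch only (not part of the dataset or of FreeRDP), a raw ASan log of that shape can be tallied per leak kind like this — the sample log fragment below is hypothetical, reconstructing the usual "N byte(s) in M object(s)" wording:

```python
import re

# Hypothetical raw LeakSanitizer fragment; in the record above the
# numbers were removed by the dataset's text preprocessing.
SAMPLE_LOG = """\
Direct leak of 256 byte(s) in 1 object(s) allocated from:
Indirect leak of 1024 byte(s) in 4 object(s) allocated from:
Indirect leak of 64 byte(s) in 2 object(s) allocated from:
SUMMARY: AddressSanitizer: 1344 byte(s) leaked in 7 allocation(s).
"""

# Matches the per-leak header lines but not the SUMMARY line.
LEAK_RE = re.compile(
    r"(Direct|Indirect) leak of (\d+) byte\(s\) in (\d+) object\(s\)"
)

def tally_leaks(log: str) -> dict:
    """Aggregate leaked bytes and object counts per leak kind."""
    totals = {}
    for kind, nbytes, nobjs in LEAK_RE.findall(log):
        bucket = totals.setdefault(kind, {"bytes": 0, "objects": 0})
        bucket["bytes"] += int(nbytes)
        bucket["objects"] += int(nobjs)
    return totals
```

A tally like this makes a long leak report readable at a glance, e.g. showing that most leaked bytes in the record's report are indirect leaks rooted in the `rfx context new` allocations.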
160,593 | 20,112,503,044 | IssuesEvent | 2022-02-07 16:16:18 | PGreaneyLYIT/easybuggy4django | https://api.github.com/repos/PGreaneyLYIT/easybuggy4django | closed | CVE-2020-10378 (Medium) detected in Pillow-5.1.0.tar.gz - autoclosed | security vulnerability | ## CVE-2020-10378 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/PGreaneyLYIT/easybuggy4django/commit/5403b8fbcea4b699ce64d05146aeaa76d1062d89">5403b8fbcea4b699ce64d05146aeaa76d1062d89</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In libImaging/PcxDecode.c in Pillow before 7.1.0, an out-of-bounds read can occur when reading PCX files where state->shuffle is instructed to read beyond state->buffer.
<p>Publish Date: 2020-06-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10378>CVE-2020-10378</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/python-pillow/Pillow/commit/41b554bc56982ee4f30238a7677c0f4ff90a73a8">https://github.com/python-pillow/Pillow/commit/41b554bc56982ee4f30238a7677c0f4ff90a73a8</a></p>
<p>Release Date: 2020-07-27</p>
<p>Fix Resolution: 7.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-10378 (Medium) detected in Pillow-5.1.0.tar.gz - autoclosed - ## CVE-2020-10378 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-5.1.0.tar.gz</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz">https://files.pythonhosted.org/packages/89/b8/2f49bf71cbd0e9485bb36f72d438421b69b7356180695ae10bd4fd3066f5/Pillow-5.1.0.tar.gz</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.1.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/PGreaneyLYIT/easybuggy4django/commit/5403b8fbcea4b699ce64d05146aeaa76d1062d89">5403b8fbcea4b699ce64d05146aeaa76d1062d89</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In libImaging/PcxDecode.c in Pillow before 7.1.0, an out-of-bounds read can occur when reading PCX files where state->shuffle is instructed to read beyond state->buffer.
<p>Publish Date: 2020-06-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10378>CVE-2020-10378</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/python-pillow/Pillow/commit/41b554bc56982ee4f30238a7677c0f4ff90a73a8">https://github.com/python-pillow/Pillow/commit/41b554bc56982ee4f30238a7677c0f4ff90a73a8</a></p>
<p>Release Date: 2020-07-27</p>
<p>Fix Resolution: 7.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in pillow tar gz autoclosed cve medium severity vulnerability vulnerable library pillow tar gz python imaging library fork library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy x pillow tar gz vulnerable library found in head commit a href found in base branch master vulnerability details in libimaging pcxdecode c in pillow before an out of bounds read can occur when reading pcx files where state shuffle is instructed to read beyond state buffer publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
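The record above flags Pillow 5.1.0 against CVE-2020-10378, with fix resolution 7.1.0. A minimal sketch of the version comparison such an audit would perform — the helper names are illustrative, not WhiteSource's tooling; only the two version strings come from the record:

```python
def parse_version(v: str) -> tuple:
    """Turn '5.1.0' into (5, 1, 0) so tuples compare numerically."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, fixed_in: str) -> bool:
    """True if the installed version predates the fixed release."""
    return parse_version(installed) < parse_version(fixed_in)

# The record pins Pillow 5.1.0, which predates the 7.1.0 fix.
print(is_vulnerable("5.1.0", "7.1.0"))  # prints: True
```

Note the tuple comparison handles multi-digit components correctly ("10.0.0" vs "9.0.0"), which naive string comparison would get wrong.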
157,461 | 12,374,089,183 | IssuesEvent | 2020-05-19 00:27:49 | xqrzd/kudu-client-net | https://api.github.com/repos/xqrzd/kudu-client-net | opened | TestAuthzTokenExpiration is flaky | flaky test | This test fails very rarely,
```
[xUnit.net 00:01:11.12] Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration [FAIL]
X Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration [36s 99ms]
Error Message:
System.OperationCanceledException : Couldn't complete RPC before timeout: Not authorized: authz token verification failure: token expired
at Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.<>c__DisplayClass6_0.<<TestAuthzTokenExpiration>g__ScanTableAsync|1>d.MoveNext() in /home/runner/work/kudu-client-net/kudu-client-net/test/Knet.Kudu.Client.FunctionalTests/MultiMasterAuthzTokenTests.cs:line 129
--- End of stack trace from previous location where exception was thrown ---
at Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration() in /home/runner/work/kudu-client-net/kudu-client-net/test/Knet.Kudu.Client.FunctionalTests/MultiMasterAuthzTokenTests.cs:line 105
--- End of stack trace from previous location where exception was thrown ---
----- Inner Stack Trace -----
at Knet.Kudu.Client.Connection.KuduConnection.SendReceiveAsync(RequestHeader header, KuduRpc rpc, CancellationToken cancellationToken) in /home/runner/work/kudu-client-net/kudu-client-net/src/Knet.Kudu.Client/Connection/KuduConnection.cs:line 90
at Knet.Kudu.Client.KuduClient.SendRpcToServerGenericAsync[T](KuduRpc`1 rpc, ServerInfo serverInfo, CancellationToken cancellationToken) in /home/runner/work/kudu-client-net/kudu-client-net/src/Knet.Kudu.Client/KuduClient.cs:line 1232
at Knet.Kudu.Client.KuduClient.SendRpcToServerGenericAsync[T](KuduRpc`1 rpc, ServerInfo serverInfo, CancellationToken cancellationToken) in /home/runner/work/kudu-client-net/kudu-client-net/src/Knet.Kudu.Client/KuduClient.cs:line 1232
at Knet.Kudu.Client.KuduClient.SendRpcToTabletAsync[T](KuduTabletRpc`1 rpc, ServerInfo serverInfo, CancellationToken cancellationToken) in /home/runner/work/kudu-client-net/kudu-client-net/src/Knet.Kudu.Client/KuduClient.cs:line 1173
at Knet.Kudu.Client.KuduClient.SendRpcToTabletAsync[T](KuduTabletRpc`1 rpc, CancellationToken cancellationToken) in /home/runner/work/kudu-client-net/kudu-client-net/src/Knet.Kudu.Client/KuduClient.cs:line 1067
at Knet.Kudu.Client.KuduClient.SendRpcAsync[T](KuduRpc`1 rpc, CancellationToken cancellationToken)
```
```
[xUnit.net 00:01:12.10] Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration [FAIL]
X Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration [34s 57ms]
Error Message:
Assert.Equal() Failure
Expected: 802660
Actual: 802650
Stack Trace:
at Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration() in /home/runner/work/kudu-client-net/kudu-client-net/test/Knet.Kudu.Client.FunctionalTests/MultiMasterAuthzTokenTests.cs:line 107
--- End of stack trace from previous location where exception was thrown ---
```
It looks timeout related, the test may need updated with a longer timeout. | 1.0 | TestAuthzTokenExpiration is flaky - This test fails very rarely,
```
[xUnit.net 00:01:11.12] Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration [FAIL]
X Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration [36s 99ms]
Error Message:
System.OperationCanceledException : Couldn't complete RPC before timeout: Not authorized: authz token verification failure: token expired
at Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.<>c__DisplayClass6_0.<<TestAuthzTokenExpiration>g__ScanTableAsync|1>d.MoveNext() in /home/runner/work/kudu-client-net/kudu-client-net/test/Knet.Kudu.Client.FunctionalTests/MultiMasterAuthzTokenTests.cs:line 129
--- End of stack trace from previous location where exception was thrown ---
at Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration() in /home/runner/work/kudu-client-net/kudu-client-net/test/Knet.Kudu.Client.FunctionalTests/MultiMasterAuthzTokenTests.cs:line 105
--- End of stack trace from previous location where exception was thrown ---
----- Inner Stack Trace -----
at Knet.Kudu.Client.Connection.KuduConnection.SendReceiveAsync(RequestHeader header, KuduRpc rpc, CancellationToken cancellationToken) in /home/runner/work/kudu-client-net/kudu-client-net/src/Knet.Kudu.Client/Connection/KuduConnection.cs:line 90
at Knet.Kudu.Client.KuduClient.SendRpcToServerGenericAsync[T](KuduRpc`1 rpc, ServerInfo serverInfo, CancellationToken cancellationToken) in /home/runner/work/kudu-client-net/kudu-client-net/src/Knet.Kudu.Client/KuduClient.cs:line 1232
at Knet.Kudu.Client.KuduClient.SendRpcToServerGenericAsync[T](KuduRpc`1 rpc, ServerInfo serverInfo, CancellationToken cancellationToken) in /home/runner/work/kudu-client-net/kudu-client-net/src/Knet.Kudu.Client/KuduClient.cs:line 1232
at Knet.Kudu.Client.KuduClient.SendRpcToTabletAsync[T](KuduTabletRpc`1 rpc, ServerInfo serverInfo, CancellationToken cancellationToken) in /home/runner/work/kudu-client-net/kudu-client-net/src/Knet.Kudu.Client/KuduClient.cs:line 1173
at Knet.Kudu.Client.KuduClient.SendRpcToTabletAsync[T](KuduTabletRpc`1 rpc, CancellationToken cancellationToken) in /home/runner/work/kudu-client-net/kudu-client-net/src/Knet.Kudu.Client/KuduClient.cs:line 1067
at Knet.Kudu.Client.KuduClient.SendRpcAsync[T](KuduRpc`1 rpc, CancellationToken cancellationToken)
```
```
[xUnit.net 00:01:12.10] Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration [FAIL]
X Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration [34s 57ms]
Error Message:
Assert.Equal() Failure
Expected: 802660
Actual: 802650
Stack Trace:
at Knet.Kudu.Client.FunctionalTests.MultiMasterAuthzTokenTests.TestAuthzTokenExpiration() in /home/runner/work/kudu-client-net/kudu-client-net/test/Knet.Kudu.Client.FunctionalTests/MultiMasterAuthzTokenTests.cs:line 107
--- End of stack trace from previous location where exception was thrown ---
```
It looks timeout related, the test may need updated with a longer timeout. | test | testauthztokenexpiration is flaky this test fails very rarely knet kudu client functionaltests multimasterauthztokentests testauthztokenexpiration x knet kudu client functionaltests multimasterauthztokentests testauthztokenexpiration error message system operationcanceledexception couldn t complete rpc before timeout not authorized authz token verification failure token expired at knet kudu client functionaltests multimasterauthztokentests c g scantableasync d movenext in home runner work kudu client net kudu client net test knet kudu client functionaltests multimasterauthztokentests cs line end of stack trace from previous location where exception was thrown at knet kudu client functionaltests multimasterauthztokentests testauthztokenexpiration in home runner work kudu client net kudu client net test knet kudu client functionaltests multimasterauthztokentests cs line end of stack trace from previous location where exception was thrown inner stack trace at knet kudu client connection kuduconnection sendreceiveasync requestheader header kudurpc rpc cancellationtoken cancellationtoken in home runner work kudu client net kudu client net src knet kudu client connection kuduconnection cs line at knet kudu client kuduclient sendrpctoservergenericasync kudurpc rpc serverinfo serverinfo cancellationtoken cancellationtoken in home runner work kudu client net kudu client net src knet kudu client kuduclient cs line at knet kudu client kuduclient sendrpctoservergenericasync kudurpc rpc serverinfo serverinfo cancellationtoken cancellationtoken in home runner work kudu client net kudu client net src knet kudu client kuduclient cs line at knet kudu client kuduclient sendrpctotabletasync kudutabletrpc rpc serverinfo serverinfo cancellationtoken cancellationtoken in home runner work kudu client net kudu client net src knet kudu client kuduclient cs line at knet kudu client kuduclient 
sendrpctotabletasync kudutabletrpc rpc cancellationtoken cancellationtoken in home runner work kudu client net kudu client net src knet kudu client kuduclient cs line at knet kudu client kuduclient sendrpcasync kudurpc rpc cancellationtoken cancellationtoken knet kudu client functionaltests multimasterauthztokentests testauthztokenexpiration x knet kudu client functionaltests multimasterauthztokentests testauthztokenexpiration error message assert equal failure expected actual stack trace at knet kudu client functionaltests multimasterauthztokentests testauthztokenexpiration in home runner work kudu client net kudu client net test knet kudu client functionaltests multimasterauthztokentests cs line end of stack trace from previous location where exception was thrown it looks timeout related the test may need updated with a longer timeout | 1 |
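The flaky-test record above ends by noting the failures look timeout related (a scan returned 802650 rows where 802660 were expected) and that a longer timeout may help. A common remedy for such eventually-consistent assertions is to poll until a deadline rather than assert once; the sketch below is a generic pattern, not code from the kudu-client-net project:

```python
import time

def wait_for(predicate, timeout_s: float = 5.0, interval_s: float = 0.05):
    """Poll `predicate` until it returns True or `timeout_s` elapses.

    Returns True on success, False on timeout. Useful for assertions on
    counts that converge over time, like the row-count scan in the record.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval_s)
    return predicate()  # one final check at the deadline
```

A test would then write `assert wait_for(lambda: scan_count() == expected, timeout_s=30)` instead of asserting the count once, with the timeout chosen generously for CI machines.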
240,842 | 20,086,310,040 | IssuesEvent | 2022-02-05 02:30:17 | flutter/flutter | https://api.github.com/repos/flutter/flutter | opened | Missing integration tests for Flutter on Android | a: tests platform-android engine P3 | There are a few scenarios that don't have integration tests.
Due to the lack of tests, customers get bugs that are fatal in many cases. e.g. application crash.
We just had one of these issues recently.
Some scenarios:
* **Flutter Surface destroyed**. `Shell::OnPlatformViewDestroyed` followed by `Shell::OnPlatformViewCreated`.
This can happen when the app is sent to the background or the app switch is toggled, then the app is brought to the foreground.
* **Flutter Activity lifecycle events**: pause, stop, destroy, restart, etc.. All these things aren't tested to my knowledge.
* **Flutter Fragment**. This is mostly used in add-to-app scenarios. The fragment is added to an activity that isn't Flutter Activity in the host app.
* **Cached/Shared engines**. This is mostly used in add-to-app scenarios. A single FlutterEngine instance is shared across multiple FlutterActivity or FlutterView instances.
* **Platform views interleaved with one of the above scenarios**. For example, an app that has multiple Activities. One might be FlutterActivities, while others aren't. We should be testing switching between activities.
To begin, these scenarios should leverage Android emulators, so they can run on presubmit checks. They should also run as part of the *engine* presubmit checks.
cc @GaryQian
cc @godofredoc @keyonghan I believe there was a plan to run some frameworks tests in the engine. Could we run these in the engine once added? | 1.0 | Missing integration tests for Flutter on Android - There are a few scenarios that don't have integration tests.
Due to the lack of tests, customers get bugs that are fatal in many cases. e.g. application crash.
We just had one of these issues recently.
Some scenarios:
* **Flutter Surface destroyed**. `Shell::OnPlatformViewDestroyed` followed by `Shell::OnPlatformViewCreated`.
This can happen when the app is sent to the background or the app switch is toggled, then the app is brought to the foreground.
* **Flutter Activity lifecycle events**: pause, stop, destroy, restart, etc.. All these things aren't tested to my knowledge.
* **Flutter Fragment**. This is mostly used in add-to-app scenarios. The fragment is added to an activity that isn't Flutter Activity in the host app.
* **Cached/Shared engines**. This is mostly used in add-to-app scenarios. A single FlutterEngine instance is shared across multiple FlutterActivity or FlutterView instances.
* **Platform views interleaved with one of the above scenarios**. For example, an app that has multiple Activities. One might be FlutterActivities, while others aren't. We should be testing switching between activities.
To begin, these scenarios should leverage Android emulators, so they can run on presubmit checks. They should also run as part of the *engine* presubmit checks.
cc @GaryQian
cc @godofredoc @keyonghan I believe there was a plan to run some frameworks tests in the engine. Could we run these in the engine once added? | test | missing integration tests for flutter on android there are a few scenarios that don t have integration tests due to the lack of tests customers get bugs that are fatal in many cases e g application crash we just had one of these issues recently some scenarios flutter surface destroyed shell onplatformviewdestroyed followed by shell onplatformviewcreated this can happen when the app is sent to the background or the app switch is toggled then the app is brought to the foreground flutter activity lifecycle events pause stop destroy restart etc all these things aren t tested to my knowledge flutter fragment this is mostly used in add to app scenarios the fragment is added to an activity that isn t flutter activity in the host app cached shared engines this is mostly used in add to app scenarios a single flutterengine instance is shared across multiple flutteractivity or flutterview instances platform views interleaved with one of the above scenarios for example an app that has multiple activities one might be flutteractivities while others aren t we should be testing switching between activities to begin these scenarios should leverage android emulators so they can run on presubmit checks they should also run as part of the engine presubmit checks cc garyqian cc godofredoc keyonghan i believe there was a plan to run some frameworks tests in the engine could we run these in the engine once added | 1 |
278,484 | 30,702,338,900 | IssuesEvent | 2023-07-27 01:21:47 | nidhi7598/linux-3.0.35 | https://api.github.com/repos/nidhi7598/linux-3.0.35 | closed | CVE-2013-2888 (High) detected in linux-stable-rtv3.8.6 - autoclosed | Mend: dependency security vulnerability | ## CVE-2013-2888 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/hid.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Multiple array index errors in drivers/hid/hid-core.c in the Human Interface Device (HID) subsystem in the Linux kernel through 3.11 allow physically proximate attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via a crafted device that provides an invalid Report ID.
<p>Publish Date: 2013-09-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-2888>CVE-2013-2888</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2888">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2888</a></p>
<p>Release Date: 2013-09-16</p>
<p>Fix Resolution: 3.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2013-2888 (High) detected in linux-stable-rtv3.8.6 - autoclosed - ## CVE-2013-2888 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/hid.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Multiple array index errors in drivers/hid/hid-core.c in the Human Interface Device (HID) subsystem in the Linux kernel through 3.11 allow physically proximate attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via a crafted device that provides an invalid Report ID.
<p>Publish Date: 2013-09-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-2888>CVE-2013-2888</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2888">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2888</a></p>
<p>Release Date: 2013-09-16</p>
<p>Fix Resolution: 3.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in linux stable autoclosed cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files include linux hid h vulnerability details multiple array index errors in drivers hid hid core c in the human interface device hid subsystem in the linux kernel through allow physically proximate attackers to execute arbitrary code or cause a denial of service heap memory corruption via a crafted device that provides an invalid report id publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
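The record above describes CVE-2013-2888: array index errors in the kernel HID core triggered by a device supplying an invalid Report ID. The essence of the fix is to bounds-check an attacker-controlled index before using it. The actual fix is C code in `drivers/hid/hid-core.c`; the sketch below only illustrates the validate-before-index pattern in Python:

```python
def lookup_report(reports: list, report_id: int):
    """Return the report for `report_id`, or None if the id is invalid.

    An index taken from untrusted device input must be range-checked
    before array access — the pattern behind the CVE-2013-2888 fix.
    """
    if not 0 <= report_id < len(reports):
        return None  # reject out-of-range ids instead of reading OOB
    return reports[report_id]
```

In C the same check prevents the heap corruption the CVE describes; in Python an unchecked negative index would silently wrap around, which is why the lower bound is checked explicitly here too.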
80,801 | 3,574,631,847 | IssuesEvent | 2016-01-27 12:48:26 | leeensminger/OED_Wetlands | https://api.github.com/repos/leeensminger/OED_Wetlands | closed | Selecting a feature in the feature manager returns IO Error | bug - high priority | When you select a feature in the feature manager, system returns an IO Error. System continues to Autozoom and the feature is displayed normally. This was not an issue in the previous production release.

| 1.0 | Selecting a feature in the feature manager returns IO Error - When you select a feature in the feature manager, system returns an IO Error. System continues to Autozoom and the feature is displayed normally. This was not an issue in the previous production release.

| non_test | selecting a feature in the feature manager returns io error when you select a feature in the feature manager system returns an io error system continues to autozoom and the feature is displayed normally this was not an issue in the previous production release | 0 |
127,648 | 10,476,582,838 | IssuesEvent | 2019-09-23 18:55:44 | NoahCardoza/HE-REvived-Issue-Tracker | https://api.github.com/repos/NoahCardoza/HE-REvived-Issue-Tracker | closed | NPC hardware resets use static values for all servers instead of going back to their original state | bug needs testing | Title, leads to some servers having software that don't on their servers, also some servers having the wrong internet speed, e.g. NSA goes down from 1gbit to 50mbps | 1.0 | NPC hardware resets use static values for all servers instead of going back to their original state - Title, leads to some servers having software that don't on their servers, also some servers having the wrong internet speed, e.g. NSA goes down from 1gbit to 50mbps | test | npc hardware resets use static values for all servers instead of going back to their original state title leads to some servers having software that dont on their servers also some servers having the wrong internet speed e g nsa goes down from to | 1
8,143 | 11,352,270,749 | IssuesEvent | 2020-01-24 13:16:46 | MarkBind/markbind | https://api.github.com/repos/MarkBind/markbind | closed | Support more than plain text for TopNav drop-downs | a-Process c.Enhancement f-TopNav | Current: TopNav dropdowns can only have plain text values e.g., `<dropdown text="Links" class="nav-link">`
Suggested: Allow formatted text, images, icons etc. | 1.0 | Support more than plain text for TopNav drop-downs - Current: TopNav dropdowns can only have plain text values e.g., `<dropdown text="Links" class="nav-link">`
Suggested: Allow formatted text, images, icons etc. | non_test | support more than plain text for topnav drop downs current topnav dropdowns can only have plain text values e g suggested allow formatted text images icons etc | 0 |
69,389 | 17,657,944,196 | IssuesEvent | 2021-08-21 00:30:40 | OpenChemistry/avogadrolibs | https://api.github.com/repos/OpenChemistry/avogadrolibs | closed | Release Builds for Mac shouldn't include "Testing" menu | bug build | Right now, the nightly / standard GitHub builds include ENABLE_TESTING to run unit tests on Ubuntu and Mac.
This means the Mac build includes the "Testing" menu, even for release builds.
We should separate out the standard build script from the "push to tag" release script. | 1.0 | Release Builds for Mac shouldn't include "Testing" menu - Right now, the nightly / standard GitHub builds include ENABLE_TESTING to run unit tests on Ubuntu and Mac.
This means the Mac build includes the "Testing" menu, even for release builds.
We should separate out the standard build script from the "push to tag" release script. | non_test | release builds for mac shouldn t include testing menu right now the nightly standard github builds include enable testing to run unit tests on ubuntu and mac the means the mac build includes the testing menu even for release builds we should separate out the standard build script from the push to tag release script | 0 |
351,995 | 32,040,339,746 | IssuesEvent | 2023-09-22 18:45:05 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | opened | Release 4.6.0 - Beta 1 - E2E UX tests - Elastic integration | type/test level/subtask |
## End-to-End (E2E) Testing Guideline
- **Documentation:** Always consult the development documentation for the current stage tag at [this link](https://documentation-dev.wazuh.com/v4.6.0-beta1/index.html). Be careful because some of the description steps might refer to a current version in production; always navigate using the current development documentation for the stage under test.
- **Test Requirements:** Ensure your test comprehensively includes a full stack and agent/s deployment as per the Deployment requirements, detailing the machine OS, installed version, and revision.
- **Deployment Options:** While deployments can be local (using VMs, Vagrant, etc) or on the aws-dev account, opt for local deployments when feasible. For AWS access, coordinate with the CICD team through [this link](https://github.com/wazuh/internal-devel-requests/issues/new?assignees=&labels=level%2Ftask%2C+request%2Foperational%2C+type%2Fchange&projects=&template=operational--request.md&title=%3CTitle%3E).
- **External Accounts:** If tests require third-party accounts (e.g., GitHub, Azure, AWS, GCP), request the necessary access through the CICD team [here](https://github.com/wazuh/internal-devel-requests/issues/new?assignees=&labels=level%2Ftask%2C+request%2Foperational%2C+type%2Fchange&projects=&template=operational--request.md&title=%3CTitle%3E).
- **Alerts:** Every test should generate a minimum of one end-to-end alert, from the agent to the dashboard, irrespective of test type.
- **Multi-node Testing:** For multi-node wazuh-manager tests, ensure agents are connected to both workers and the master node.
- **Package Verification:** Use the pre-release package that matches the current TAG you're testing. Confirm its version and revision.
- **Filebeat Errors:** If you encounter errors with Filebeat during testing, refer to [this Slack discussion](https://wazuh-team.slack.com/archives/C03BDG0K6JC/p1672168163537809) for insights and resolutions.
- **Known Issues:** Familiarize yourself with previously reported issues in the Known Issues section. This helps in identifying already recognized errors during testing.
- **Reporting New Issues:** Any new errors discovered during testing that aren't listed under Known Issues should be reported. Assign the issue to the corresponding team (QA if unsure), add the `Release testing/publication` objective and `Very high` priority. Communicate these to the team and QA via the c-release Slack channel.
- **Test Conduct:** It's imperative to be thorough in your testing, offering enough detail for reviewers. Incomplete tests might necessitate a redo.
- **Documentation Feedback:** Encountering documentation gaps, unclear guidelines, or anything that disrupts the testing or UX? Open an issue, especially if it's not listed under Known Issues.
- **Format:** If this is your first time doing this, refer to the format (but not necessarily the content, as it may vary) of previous E2E tests, here you have an example https://github.com/wazuh/wazuh/issues/13994.
- **Status and completion:** Change the issue status within your team project accordingly. Once you finish testing and write the conclusions, move it to Pending review and notify the @wazuh/frontend team via Slack using the [c-release channel](https://wazuh-team.slack.com/archives/C02A737S5MJ). Beware that the reviewers might request additional information or task repetitions.
- **For reviewers:** Please move the issue to Pending final review and notify via Slack using the same thread if everything is ok, otherwise, perform an issue update with the requested changes and move it to On hold, increase the review_cycles in the team project by one and notify the issue assignee via Slack using the same thread.
For the conclusions and the issue testing and updates, use the following legend:
**Status legend**
- 🟢 All checks passed
- 🟡 Found a known issue
- 🔴 Found a new error
## Deployment requirements
| Component | Installation | Type | OS |
|----------|--------------|------|----|
| Indexer | [Quickstart](https://documentation-dev.wazuh.com/v4.6.0-beta1/quickstart.html) | - | Oracle Linux 7 x86_64 |
| Server | [Quickstart](https://documentation-dev.wazuh.com/v4.6.0-beta1/quickstart.html) | - | Oracle Linux 7 x86_64 |
| Dashboard | [Quickstart](https://documentation-dev.wazuh.com/v4.6.0-beta1/quickstart.html) | - | Oracle Linux 7 x86_64 |
| Agent | [Installing Wazuh agents](https://documentation-dev.wazuh.com/v4.6.0-beta1/installation-guide/wazuh-agent/index.html) | - | Oracle Linux 7 x86_64 |
## Test description
Follow and complete Elastic integration guide https://documentation-dev.wazuh.com/v4.6.0-beta1/integrations-guide/elastic-stack/index.html
## Known issues
There are no known issues.
## Conclusions
Summarize the errors detected (Known Issues included). Illustrate using the table below, removing current examples:
| **Status** | **Test** | **Failure type** | **Notes** |
|----------------|-------------|---------------------|----------------|
| 🟡 | Example Test: API Integration | Timeout issues on certain endpoints | Known issue: https://github.com/example/repo/issues/12345 |
| 🔴 | Example Test: Data Migration | Data inconsistency in the new version | New issue opened: https://github.com/example/repo/issues/67890 |
## Feedback
We value your feedback. Please provide insights on your testing experience.
- Was the testing guideline clear? Were there any ambiguities?
- Did you face any challenges not covered by the guideline?
- Suggestions for improvement:
## Reviewers validation
The criteria for completing this task is based on the validation of the conclusions and the test results by all reviewers.
All the checkboxes below must be marked in order to close this issue.
- [ ] @havidarou
- [ ] @wazuh/frontend | 1.0 | Release 4.6.0 - Beta 1 - E2E UX tests - Elastic integration -
## End-to-End (E2E) Testing Guideline
- **Documentation:** Always consult the development documentation for the current stage tag at [this link](https://documentation-dev.wazuh.com/v4.6.0-beta1/index.html). Be careful because some of the description steps might refer to a current version in production; always navigate using the current development documentation for the stage under test.
- **Test Requirements:** Ensure your test comprehensively includes a full stack and agent/s deployment as per the Deployment requirements, detailing the machine OS, installed version, and revision.
- **Deployment Options:** While deployments can be local (using VMs, Vagrant, etc) or on the aws-dev account, opt for local deployments when feasible. For AWS access, coordinate with the CICD team through [this link](https://github.com/wazuh/internal-devel-requests/issues/new?assignees=&labels=level%2Ftask%2C+request%2Foperational%2C+type%2Fchange&projects=&template=operational--request.md&title=%3CTitle%3E).
- **External Accounts:** If tests require third-party accounts (e.g., GitHub, Azure, AWS, GCP), request the necessary access through the CICD team [here](https://github.com/wazuh/internal-devel-requests/issues/new?assignees=&labels=level%2Ftask%2C+request%2Foperational%2C+type%2Fchange&projects=&template=operational--request.md&title=%3CTitle%3E).
- **Alerts:** Every test should generate a minimum of one end-to-end alert, from the agent to the dashboard, irrespective of test type.
- **Multi-node Testing:** For multi-node wazuh-manager tests, ensure agents are connected to both workers and the master node.
- **Package Verification:** Use the pre-release package that matches the current TAG you're testing. Confirm its version and revision.
- **Filebeat Errors:** If you encounter errors with Filebeat during testing, refer to [this Slack discussion](https://wazuh-team.slack.com/archives/C03BDG0K6JC/p1672168163537809) for insights and resolutions.
- **Known Issues:** Familiarize yourself with previously reported issues in the Known Issues section. This helps in identifying already recognized errors during testing.
- **Reporting New Issues:** Any new errors discovered during testing that aren't listed under Known Issues should be reported. Assign the issue to the corresponding team (QA if unsure), add the `Release testing/publication` objective and `Very high` priority. Communicate these to the team and QA via the c-release Slack channel.
- **Test Conduct:** It's imperative to be thorough in your testing, offering enough detail for reviewers. Incomplete tests might necessitate a redo.
- **Documentation Feedback:** Encountering documentation gaps, unclear guidelines, or anything that disrupts the testing or UX? Open an issue, especially if it's not listed under Known Issues.
- **Format:** If this is your first time doing this, refer to the format (but not necessarily the content, as it may vary) of previous E2E tests, here you have an example https://github.com/wazuh/wazuh/issues/13994.
- **Status and completion:** Change the issue status within your team project accordingly. Once you finish testing and write the conclusions, move it to Pending review and notify the @wazuh/frontend team via Slack using the [c-release channel](https://wazuh-team.slack.com/archives/C02A737S5MJ). Beware that the reviewers might request additional information or task repetitions.
- **For reviewers:** Please move the issue to Pending final review and notify via Slack using the same thread if everything is ok, otherwise, perform an issue update with the requested changes and move it to On hold, increase the review_cycles in the team project by one and notify the issue assignee via Slack using the same thread.
For the conclusions and the issue testing and updates, use the following legend:
**Status legend**
- 🟢 All checks passed
- 🟡 Found a known issue
- 🔴 Found a new error
## Deployment requirements
| Component | Installation | Type | OS |
|----------|--------------|------|----|
| Indexer | [Quickstart](https://documentation-dev.wazuh.com/v4.6.0-beta1/quickstart.html) | - | Oracle Linux 7 x86_64 |
| Server | [Quickstart](https://documentation-dev.wazuh.com/v4.6.0-beta1/quickstart.html) | - | Oracle Linux 7 x86_64 |
| Dashboard | [Quickstart](https://documentation-dev.wazuh.com/v4.6.0-beta1/quickstart.html) | - | Oracle Linux 7 x86_64 |
| Agent | [Installing Wazuh agents](https://documentation-dev.wazuh.com/v4.6.0-beta1/installation-guide/wazuh-agent/index.html) | - | Oracle Linux 7 x86_64 |
## Test description
Follow and complete Elastic integration guide https://documentation-dev.wazuh.com/v4.6.0-beta1/integrations-guide/elastic-stack/index.html
## Known issues
There are no known issues.
## Conclusions
Summarize the errors detected (Known Issues included). Illustrate using the table below, removing current examples:
| **Status** | **Test** | **Failure type** | **Notes** |
|----------------|-------------|---------------------|----------------|
| 🟡 | Example Test: API Integration | Timeout issues on certain endpoints | Known issue: https://github.com/example/repo/issues/12345 |
| 🔴 | Example Test: Data Migration | Data inconsistency in the new version | New issue opened: https://github.com/example/repo/issues/67890 |
## Feedback
We value your feedback. Please provide insights on your testing experience.
- Was the testing guideline clear? Were there any ambiguities?
- Did you face any challenges not covered by the guideline?
- Suggestions for improvement:
## Reviewers validation
The criteria for completing this task is based on the validation of the conclusions and the test results by all reviewers.
All the checkboxes below must be marked in order to close this issue.
- [ ] @havidarou
- [ ] @wazuh/frontend | test | release beta ux tests elastic integration end to end testing guideline documentation always consult the development documentation for the current stage tag at be careful because some of the description steps might refer to a current version in production always navigate using the current development documention for the stage under test test requirements ensure your test comprehensively includes a full stack and agent s deployment as per the deployment requirements detailing the machine os installed version and revision deployment options while deployments can be local using vms vagrant etc or on the aws dev account opt for local deployments when feasible for aws access coordinate with the cicd team through external accounts if tests require third party accounts e g github azure aws gcp request the necessary access through the cicd team alerts every test should generate a minimum of one end to end alert from the agent to the dashboard irrespective of test type multi node testing for multi node wazuh manager tests ensure agents are connected to both workers and the master node package verification use the pre release package that matches the current tag you re testing confirm its version and revision filebeat errors if you encounter errors with filebeat during testing refer to for insights and resolutions known issues familiarize yourself with previously reported issues in the known issues section this helps in identifying already recognized errors during testing reporting new issues any new errors discovered during testing that aren t listed under known issues should be reported assign the issue to the corresponding team qa if unsure add the release testing publication objective and very high priority communicate these to the team and qa via the c release slack channel test conduct it s imperative to be thorough in your testing offering enough detail for reviewers incomplete tests might necessitate a redo documentation feedback 
encountering documentation gaps unclear guidelines or anything that disrupts the testing or ux open an issue especially if it s not listed under known issues format if this is your first time doing this refer to the format but not necessarily the content as it may vary of previous tests here you have an example status and completion change the issue status within your team project accordingly once you finish testing and write the conclusions move it to pending review and notify the wazuh frontend team via slack using the beware that the reviewers might request additional information or task repetitions for reviewers please move the issue to pending final review and notify via slack using the same thread if everything is ok otherwise perform an issue update with the requested changes and move it to on hold increase the review cycles in the team project by one and notify the issue assignee via slack using the same thread for the conclusions and the issue testing and updates use the following legend status legend 🟢 all checks passed 🟡 found a known issue 🔴 found a new error deployment requirements component installation type os indexer oracle linux server oracle linux dashboard oracle linux agent oracle linux test description follow and complete elastic integration guide known issues there are no known issues conclusions summarize the errors detected known issues included illustrate using the table below removing current examples status test failure type notes 🟡 example test api integration timeout issues on certain endpoints known issue 🔴 example test data migration data inconsistency in the new version new issue opened feedback we value your feedback please provide insights on your testing experience was the testing guideline clear were there any ambiguities did you face any challenges not covered by the guideline suggestions for improvement reviewers validation the criteria for completing this task is based on the validation of the conclusions and the test results 
by all reviewers all the checkboxes below must be marked in order to close this issue havidarou wazuh frontend | 1 |
254,423 | 8,073,863,203 | IssuesEvent | 2018-08-06 20:46:27 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | IndexOutOfRangeException | Medium Priority | **==Sent from Eco Crash Server==**
**Details:**
Message:Index was outside the bounds of the array.
<br><br>
Source:Eco.Simulation
**Stack:**
at Eco.Simulation.WorldLayers.History.WorldLayerHistory.MakeDeltaBitmap(Int32 width, Int32 height, Byte[] history, Func`3 pixelFunc, Boolean firstFrame)
<br><br>
at Eco.Simulation.WorldLayers.History.WorldLayerHistory.SaveGIF(String name, Int32 width, Int32 height, Func`3 pixelFunc, Boolean animate, Boolean backup)
<br><br>
at Eco.Simulation.WorldLayers.History.WorldLayerHistory.SaveGIF(WorldLayer layer, Boolean backup)
<br><br>
at Eco.Simulation.WorldLayers.History.WorldLayerHistory.Update()
<br><br>
at Eco.Simulation.EcoSim.CollectStats()
<br><br>
at Eco.Core.Plugins.TickTimeUtil.TimeSubprocess(Action func)
<br><br>
at Eco.Simulation.EcoSim.DoTick(TickSample tick)
<br><br>
at Eco.Simulation.EcoSim.Run()
<br><br>
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
<br><br>
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
<br><br>
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
<br><br>
at System.Threading.ThreadHelper.ThreadStart(Object obj) | 1.0 | IndexOutOfRangeException - **==Sent from Eco Crash Server==**
**Details:**
Message:Index was outside the bounds of the array.
<br><br>
Source:Eco.Simulation
**Stack:**
at Eco.Simulation.WorldLayers.History.WorldLayerHistory.MakeDeltaBitmap(Int32 width, Int32 height, Byte[] history, Func`3 pixelFunc, Boolean firstFrame)
<br><br>
at Eco.Simulation.WorldLayers.History.WorldLayerHistory.SaveGIF(String name, Int32 width, Int32 height, Func`3 pixelFunc, Boolean animate, Boolean backup)
<br><br>
at Eco.Simulation.WorldLayers.History.WorldLayerHistory.SaveGIF(WorldLayer layer, Boolean backup)
<br><br>
at Eco.Simulation.WorldLayers.History.WorldLayerHistory.Update()
<br><br>
at Eco.Simulation.EcoSim.CollectStats()
<br><br>
at Eco.Core.Plugins.TickTimeUtil.TimeSubprocess(Action func)
<br><br>
at Eco.Simulation.EcoSim.DoTick(TickSample tick)
<br><br>
at Eco.Simulation.EcoSim.Run()
<br><br>
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
<br><br>
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
<br><br>
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
<br><br>
at System.Threading.ThreadHelper.ThreadStart(Object obj) | non_test | indexoutofrangeexception sent from eco crash server details message index was outside the bounds of the array source eco simulation stack at eco simulation worldlayers history worldlayerhistory makedeltabitmap width height byte history func pixelfunc boolean firstframe at eco simulation worldlayers history worldlayerhistory savegif string name width height func pixelfunc boolean animate boolean backup at eco simulation worldlayers history worldlayerhistory savegif worldlayer layer boolean backup at eco simulation worldlayers history worldlayerhistory update at eco simulation ecosim collectstats at eco core plugins ticktimeutil timesubprocess action func at eco simulation ecosim dotick ticksample tick at eco simulation ecosim run at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state boolean preservesyncctx at system threading executioncontext run executioncontext executioncontext contextcallback callback object state boolean preservesyncctx at system threading executioncontext run executioncontext executioncontext contextcallback callback object state at system threading threadhelper threadstart object obj | 0
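The Eco crash record above ends in a `MakeDeltaBitmap(Int32 width, Int32 height, Byte[] history, …)` frame, the classic shape of an index-out-of-range failure: a pixel loop assuming `history.length == width * height`. The original is C# and not shown in the record, so this Java sketch (the names `makeDelta` and `makeDeltaSafe` are hypothetical) only illustrates the failure mode and a defensive variant:

```java
// Sketch of the failure mode behind the MakeDeltaBitmap stack trace:
// a pixel loop that assumes history.length == width * height.
class DeltaBitmap {
    // Unsafe: throws ArrayIndexOutOfBoundsException when the history
    // buffer is shorter than the layer it is rendered against.
    static int[] makeDelta(int width, int height, byte[] history) {
        int[] pixels = new int[width * height];
        for (int i = 0; i < width * height; i++) {
            pixels[i] = history[i] & 0xFF;
        }
        return pixels;
    }

    // Guarded variant: clamp the loop to the data actually available,
    // leaving missing pixels at 0 instead of crashing the simulation tick.
    static int[] makeDeltaSafe(int width, int height, byte[] history) {
        int n = Math.min(width * height, history.length);
        int[] pixels = new int[width * height];
        for (int i = 0; i < n; i++) {
            pixels[i] = history[i] & 0xFF;
        }
        return pixels;
    }

    public static void main(String[] args) {
        byte[] history = new byte[6]; // only 6 samples for a 3x3 layer
        System.out.println(makeDeltaSafe(3, 3, history).length); // prints "9"
        try {
            makeDelta(3, 3, history);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("crashed as in the report: " + e);
        }
    }
}
```

A mismatch like this typically appears when the layer was resized after its history was captured, which would explain a crash inside a periodic stats/GIF-saving tick rather than at startup.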
44,473 | 2,906,140,630 | IssuesEvent | 2015-06-19 07:57:39 | gbecan/OpenCompare | https://api.github.com/repos/gbecan/OpenCompare | closed | The editor fails to load the PCM "Comparison of European traffic laws" | bug duplicate Priority: low | A lot of strack traces appear while loading with always the same error : "d is not a function". | 1.0 | The editor fails to load the PCM "Comparison of European traffic laws" - A lot of strack traces appear while loading with always the same error : "d is not a function". | non_test | the editor fails to load the pcm comparison of european traffic laws a lot of strack traces appear while loading with always the same error d is not a function | 0 |
49,472 | 20,767,229,543 | IssuesEvent | 2022-03-15 22:06:42 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | Feature Request: Cloud9 - SSM support | enhancement service/cloud9 | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Cloud9 creates an EC2 instance, and can connect to it with ssh, or ssm (AWS Systems Manager Agent).
The SSM option is not available from this Terraform provider but is available through the go AWS SDK.
This feature would enable the creation of preconfigured Cloud9 environments through SSM patches.
### New or Affected Resource(s)
aws_cloud9_environment_ec2
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://aws.amazon.com/about-aws/whats-new/2018/04/introducing-amazon-ec2-fleet/
--->
https://github.com/hashicorp/terraform-provider-aws/issues/18487
There seems to be a stale PR that needs rework
https://github.com/hashicorp/terraform-provider-aws/pull/19195
| 1.0 | Feature Request: Cloud9 - SSM support - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Cloud9 creates an EC2 instance, and can connect to it with ssh, or ssm (AWS Systems Manager Agent).
The SSM option is not available from this Terraform provider but is available through the go AWS SDK.
This feature would enable the creation of preconfigured Cloud9 environments through SSM patches.
### New or Affected Resource(s)
aws_cloud9_environment_ec2
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://aws.amazon.com/about-aws/whats-new/2018/04/introducing-amazon-ec2-fleet/
--->
https://github.com/hashicorp/terraform-provider-aws/issues/18487
There seems to be a stale PR that needs rework
https://github.com/hashicorp/terraform-provider-aws/pull/19195
| non_test | feature request ssm support community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description creates an instance and can connect to it with ssh or ssm aws systems manager agent the ssm option is not available from this terraform provider but is available through the go aws sdk this feature would enable the creation of preconfigured environments through ssm patches new or affected resource s aws environment references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example there seems to be a stale pr that needs rework | 0 |
182,555 | 14,141,843,764 | IssuesEvent | 2020-11-10 13:17:39 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | MulticastDeserializationTest test failure | Source: Internal Team: Core Type: Test-Failure | - Fails on `Hazelcast-4.maintenance-ZuluJDK15`
- Fails on [Build #3 (Nov 8, 2020 7:30:42 AM)](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-4.maintenance-ZuluJDK15/3/testReport/junit/com.hazelcast.cluster/MulticastDeserializationTest/test/)
- Error
```
Untrusted deserialization is possible
```
- Stacktrace
```
java.lang.AssertionError: Untrusted deserialization is possible
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.assertTrue(Assert.java:42)
at org.junit.Assert.assertFalse(Assert.java:65)
at com.hazelcast.cluster.MulticastDeserializationTest.test(MulticastDeserializationTest.java:94)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:114)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:106)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:832)
```
- Noticed
```
09:35:07,829 WARN |testWithoutFilter| - [MulticastService] hz.nifty_bartik.MulticastThread - [127.0.0.1]:5701 [dev] [4.0.4-SNAPSHOT] class example.serialization.TestDeserialized cannot be cast to class com.hazelcast.internal.cluster.impl.JoinMessage (example.serialization.TestDeserialized and com.hazelcast.internal.cluster.impl.JoinMessage are in unnamed module of loader 'app')
java.lang.ClassCastException: class example.serialization.TestDeserialized cannot be cast to class com.hazelcast.internal.cluster.impl.JoinMessage (example.serialization.TestDeserialized and com.hazelcast.internal.cluster.impl.JoinMessage are in unnamed module of loader 'app')
at com.hazelcast.internal.cluster.impl.MulticastService.receive(MulticastService.java:249) [classes/:?]
at com.hazelcast.internal.cluster.impl.MulticastService.run(MulticastService.java:200) [classes/:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
```
- kindly check | 1.0 | MulticastDeserializationTest test failure - - Fails on `Hazelcast-4.maintenance-ZuluJDK15`
- Fails on [Build #3 (Nov 8, 2020 7:30:42 AM)](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-4.maintenance-ZuluJDK15/3/testReport/junit/com.hazelcast.cluster/MulticastDeserializationTest/test/)
- Error
```
Untrusted deserialization is possible
```
- Stacktrace
```
java.lang.AssertionError: Untrusted deserialization is possible
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.assertTrue(Assert.java:42)
at org.junit.Assert.assertFalse(Assert.java:65)
at com.hazelcast.cluster.MulticastDeserializationTest.test(MulticastDeserializationTest.java:94)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:114)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:106)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:832)
```
- Noticed
```
09:35:07,829 WARN |testWithoutFilter| - [MulticastService] hz.nifty_bartik.MulticastThread - [127.0.0.1]:5701 [dev] [4.0.4-SNAPSHOT] class example.serialization.TestDeserialized cannot be cast to class com.hazelcast.internal.cluster.impl.JoinMessage (example.serialization.TestDeserialized and com.hazelcast.internal.cluster.impl.JoinMessage are in unnamed module of loader 'app')
java.lang.ClassCastException: class example.serialization.TestDeserialized cannot be cast to class com.hazelcast.internal.cluster.impl.JoinMessage (example.serialization.TestDeserialized and com.hazelcast.internal.cluster.impl.JoinMessage are in unnamed module of loader 'app')
at com.hazelcast.internal.cluster.impl.MulticastService.receive(MulticastService.java:249) [classes/:?]
at com.hazelcast.internal.cluster.impl.MulticastService.run(MulticastService.java:200) [classes/:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
```
- kindly check | test | multicastdeserializationtest test failure fails on hazelcast maintenance fails on error untrusted deserialization is possible stacktrace java lang assertionerror untrusted deserialization is possible at org junit assert fail assert java at org junit assert asserttrue assert java at org junit assert assertfalse assert java at com hazelcast cluster multicastdeserializationtest test multicastdeserializationtest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at java base java util concurrent futuretask run futuretask java at java base java lang thread run thread java noticed warn testwithoutfilter hz nifty bartik multicastthread class example serialization testdeserialized cannot be cast to class com hazelcast internal cluster impl joinmessage example serialization testdeserialized and com hazelcast internal cluster impl joinmessage are in unnamed module of loader app java lang classcastexception class example serialization testdeserialized cannot be cast to class com hazelcast internal cluster impl joinmessage example serialization testdeserialized and com hazelcast internal cluster impl joinmessage are in unnamed module of loader app at com hazelcast 
internal cluster impl multicastservice receive multicastservice java at com hazelcast internal cluster impl multicastservice run multicastservice java at java lang thread run thread java kindly check | 1 |
88,816 | 15,820,492,942 | IssuesEvent | 2021-04-05 19:03:26 | dmyers87/tika | https://api.github.com/repos/dmyers87/tika | closed | CVE-2020-9548 (High) detected in jackson-databind-2.9.9.2.jar - autoclosed | security vulnerability | ## CVE-2020-9548 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: tika/tika-parsers/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.2/jackson-databind-2.9.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dmyers87/tika/commit/b0634f6d9bc18cc79f623715d40c9e8ed98924fc">b0634f6d9bc18cc79f623715d40c9e8ed98924fc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to br.com.anteros.dbcp.AnterosDBCPConfig (aka anteros-core).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9548>CVE-2020-9548</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9548">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9548</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.6,2.9.10.4</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9.2","packageFilePaths":["/tika-parsers/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.6,2.9.10.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-9548","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to br.com.anteros.dbcp.AnterosDBCPConfig (aka anteros-core).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9548","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-9548 (High) detected in jackson-databind-2.9.9.2.jar - autoclosed - ## CVE-2020-9548 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: tika/tika-parsers/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.2/jackson-databind-2.9.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dmyers87/tika/commit/b0634f6d9bc18cc79f623715d40c9e8ed98924fc">b0634f6d9bc18cc79f623715d40c9e8ed98924fc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to br.com.anteros.dbcp.AnterosDBCPConfig (aka anteros-core).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9548>CVE-2020-9548</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9548">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9548</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.6,2.9.10.4</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9.2","packageFilePaths":["/tika-parsers/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.6,2.9.10.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-9548","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to br.com.anteros.dbcp.AnterosDBCPConfig (aka anteros-core).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9548","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_test | cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tika tika parsers pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to br com anteros dbcp anterosdbcpconfig aka anteros core publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact 
high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to br com anteros dbcp anterosdbcpconfig aka anteros core vulnerabilityurl | 0 |
330,790 | 28,485,820,638 | IssuesEvent | 2023-04-18 07:48:06 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | reopened | Fix jax_numpy_manipulation.test_jax_expand_dims | JAX Frontend Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4586062504/jobs/8098629047" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4586062504/jobs/8098632534" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4586154204/jobs/8098798549" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4586003022/jobs/8098526466" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>Not found</summary>
Not found
</details>
<details>
<summary>Not found</summary>
Not found
</details>
<details>
<summary>Not found</summary>
Not found
</details>
<details>
<summary>Not found</summary>
Not found
</details>
| 1.0 | Fix jax_numpy_manipulation.test_jax_expand_dims - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4586062504/jobs/8098629047" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4586062504/jobs/8098632534" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4586154204/jobs/8098798549" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4586003022/jobs/8098526466" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>Not found</summary>
Not found
</details>
<details>
<summary>Not found</summary>
Not found
</details>
<details>
<summary>Not found</summary>
Not found
</details>
<details>
<summary>Not found</summary>
Not found
</details>
| test | fix jax numpy manipulation test jax expand dims tensorflow img src torch img src numpy img src jax img src not found not found not found not found not found not found not found not found | 1 |
64,807 | 14,682,422,100 | IssuesEvent | 2020-12-31 16:23:00 | labsai/EDDI | https://api.github.com/repos/labsai/EDDI | opened | CVE-2019-11358 (Medium) detected in jquery-3.3.1.min.js | security vulnerability | ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.3.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js</a></p>
<p>Path to vulnerable library: EDDI/apiserver/src/main/resources/js/jquery-3.3.1.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/labsai/EDDI/commit/e141334e85f823e2e1a3c8e4ac2c90fe6a35c48c">e141334e85f823e2e1a3c8e4ac2c90fe6a35c48c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-11358 (Medium) detected in jquery-3.3.1.min.js - ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.3.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js</a></p>
<p>Path to vulnerable library: EDDI/apiserver/src/main/resources/js/jquery-3.3.1.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/labsai/EDDI/commit/e141334e85f823e2e1a3c8e4ac2c90fe6a35c48c">e141334e85f823e2e1a3c8e4ac2c90fe6a35c48c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library eddi apiserver src main resources js jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
65,920 | 8,854,732,338 | IssuesEvent | 2019-01-09 02:45:49 | govau/uikit | https://api.github.com/repos/govau/uikit | closed | Callout - use of aria-label in <p> | documentation | Regarding callout https://designsystem.gov.au/components/callout/
The use of aria-label in the `<section>` tag adds value as it adds a heading to the container. If you are looking at the element list using screen reader the aria-label added will be listed out.
But, for a `<p>` tag, I am not sure if aria-label adds any value.
example:
```
<p class="au-callout" aria-label="Callout description1">
A callout.
</p>
```
further reference: https://www.w3.org/TR/html51/grouping-content.html#elementdef-p | 1.0 | Callout - use of aria-label in <p> - Regarding callout https://designsystem.gov.au/components/callout/
The use of aria-label in the `<section>` tag adds value as it adds a heading to the container. If you are looking at the element list using screen reader the aria-label added will be listed out.
But, for a `<p>` tag, I am not sure if aria-label adds any value.
example:
```
<p class="au-callout" aria-label="Callout description1">
A callout.
</p>
```
further reference: https://www.w3.org/TR/html51/grouping-content.html#elementdef-p | non_test | callout use of aria label in regarding callout the use of aria label in the tag adds value as it adds a heading to the container if you are looking at the element list using screen reader the aria label added will be listed out but for a tag i am not sure if aria label adds any value example a callout further reference | 0 |
329,718 | 28,303,456,105 | IssuesEvent | 2023-04-10 08:37:28 | milvus-io/milvus | https://api.github.com/repos/milvus-io/milvus | opened | [Bug]: [benchmark][standalone] milvus search failed to report :"_InactiveRpcError of RPC that terminated with" | kind/bug needs-triage test/benchmark | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version:2.2.0-20230410-58eb118a
- Deployment mode(standalone or cluster):standalone
- MQ type(rocksmq, pulsar or kafka):
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
```
### Current Behavior
case : test_concurrent_locust_ivf_sq8_search_standalone
argo task : fouramf-concurrent-8svs2 , id : 2
server:
```
fouramf-concurrent-8svs2-2-etcd-0 1/1 Running 0 2m25s 10.104.6.100 4am-node13 <none> <none>
fouramf-concurrent-8svs2-2-milvus-standalone-8487bdd875-j4dw8 1/1 Running 0 2m25s 10.104.5.213 4am-node12 <none> <none>
fouramf-concurrent-8svs2-2-minio-db79b7fb9-f8lkg 1/1 Running 0 2m25s 10.104.6.99 4am-node13 <none> <none>
```
client log:
[test_concurrent_locust_ivf_sq8_search_standalone.zip](https://github.com/milvus-io/milvus/files/11189009/test_concurrent_locust_ivf_sq8_search_standalone.zip)
### Expected Behavior
_No response_
### Steps To Reproduce
```markdown
1. create a collection or use an existing collection
2. build index on vector column
3. insert a certain number of vectors
4. flush collection
5. build index on vector column with the same parameters
6. build index on on scalars column or not
7. count the total number of rows
8. load collection
9. perform concurrent operations
10. clean all collections or not
```
### Milvus Log
_No response_
### Anything else?
_No response_ | 1.0 | [Bug]: [benchmark][standalone] milvus search failed to report :"_InactiveRpcError of RPC that terminated with" - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version:2.2.0-20230410-58eb118a
- Deployment mode(standalone or cluster):standalone
- MQ type(rocksmq, pulsar or kafka):
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
```
### Current Behavior
case : test_concurrent_locust_ivf_sq8_search_standalone
argo task : fouramf-concurrent-8svs2 , id : 2
server:
```
fouramf-concurrent-8svs2-2-etcd-0 1/1 Running 0 2m25s 10.104.6.100 4am-node13 <none> <none>
fouramf-concurrent-8svs2-2-milvus-standalone-8487bdd875-j4dw8 1/1 Running 0 2m25s 10.104.5.213 4am-node12 <none> <none>
fouramf-concurrent-8svs2-2-minio-db79b7fb9-f8lkg 1/1 Running 0 2m25s 10.104.6.99 4am-node13 <none> <none>
```
client log:
[test_concurrent_locust_ivf_sq8_search_standalone.zip](https://github.com/milvus-io/milvus/files/11189009/test_concurrent_locust_ivf_sq8_search_standalone.zip)
### Expected Behavior
_No response_
### Steps To Reproduce
```markdown
1. create a collection or use an existing collection
2. build index on vector column
3. insert a certain number of vectors
4. flush collection
5. build index on vector column with the same parameters
6. build index on on scalars column or not
7. count the total number of rows
8. load collection
9. perform concurrent operations
10. clean all collections or not
```
### Milvus Log
_No response_
### Anything else?
_No response_ | test | milvus search failed to report inactiverpcerror of rpc that terminated with is there an existing issue for this i have searched the existing issues environment markdown milvus version deployment mode standalone or cluster standalone mq type rocksmq pulsar or kafka sdk version e g pymilvus os ubuntu or centos cpu memory gpu others current behavior case test concurrent locust ivf search standalone argo task fouramf concurrent id server fouramf concurrent etcd running fouramf concurrent milvus standalone running fouramf concurrent minio running client log expected behavior no response steps to reproduce markdown create a collection or use an existing collection build index on vector column insert a certain number of vectors flush collection build index on vector column with the same parameters build index on on scalars column or not count the total number of rows load collection perform concurrent operations clean all collections or not milvus log no response anything else no response | 1 |
89,854 | 8,215,730,092 | IssuesEvent | 2018-09-05 06:55:59 | imixs/imixs-adapters | https://api.github.com/repos/imixs/imixs-adapters | closed | LDAPLookupService - provide public cache method | Testing enhancement | provide public cache method so that a external client can add objects into the internal cache | 1.0 | LDAPLookupService - provide public cache method - provide public cache method so that a external client can add objects into the internal cache | test | ldaplookupservice provide public cache method provide public cache method so that a external client can add objects into the internal cache | 1 |
1,371 | 2,603,842,152 | IssuesEvent | 2015-02-24 18:14:57 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳龟头上面长疙瘩 | auto-migrated Priority-Medium Type-Defect | ```
沈阳龟头上面长疙瘩〓沈陽軍區政治部醫院性病〓TEL:024-3102
3308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位�
��沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的�
��史悠久、設備精良、技術權威、專家云集,是預防、保健、
醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等��
�隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東�
��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍
后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二��
�功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:25 | 1.0 | 沈阳龟头上面长疙瘩 - ```
沈阳龟头上面长疙瘩〓沈陽軍區政治部醫院性病〓TEL:024-3102
3308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位�
��沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的�
��史悠久、設備精良、技術權威、專家云集,是預防、保健、
醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等��
�隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東�
��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍
后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二��
�功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:25 | non_test | 沈阳龟头上面长疙瘩 沈阳龟头上面长疙瘩〓沈陽軍區政治部醫院性病〓tel: 〓 , 。位� �� 。是一所與新中國同建立共輝煌的� ��史悠久、設備精良、技術權威、專家云集,是預防、保健、 醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等�� �隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東� ��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍 后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二�� �功。 original issue reported on code google com by gmail com on jun at | 0 |
258,295 | 22,300,432,490 | IssuesEvent | 2022-06-13 08:13:29 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Delete a secondary user store giving an invalid user store domain id | Priority/Highest Severity/Critical bug Affected-6.0.0 IS-6.0.0-Test-Hackathon | **Describe the issue:**
When try to delete a secondary user store using invalid user store domain id using REST API (https://localhost:9443/t/carbon.super/api/server/v1/userstores/{userstore-domain-id}) does not gives expected 404 as response. Instead gives 204 or 500.
<img width="1281" alt="Screenshot 2022-05-25 at 13 36 19" src="https://user-images.githubusercontent.com/38417165/170213180-4796e79c-a494-4575-a6a9-7dcea9df9e1e.png">
<img width="1281" alt="Screenshot 2022-05-25 at 13 38 09" src="https://user-images.githubusercontent.com/38417165/170213972-f487d37d-f622-4165-8fd6-17044b6d00b1.png">
**How to reproduce:**
1. Create a secondary user store.
2. Extract user store domain id.
3. Send a DELETE request to https://localhost:9443/t/carbon.super/api/server/v1/userstores/{userstore-domain-id}.
**Expected behavior:**
Delete a secondary user store giving an invalid user store domain id should give 404.
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: IS 6.0.0-m1
- OS: MacOS 12.1
- JDK: Oracle JDK 11.0.15
| 1.0 | Delete a secondary user store giving an invalid user store domain id - **Describe the issue:**
When trying to delete a secondary user store using an invalid user store domain id via the REST API (https://localhost:9443/t/carbon.super/api/server/v1/userstores/{userstore-domain-id}), the expected 404 response is not returned; instead, 204 or 500 is returned.
<img width="1281" alt="Screenshot 2022-05-25 at 13 36 19" src="https://user-images.githubusercontent.com/38417165/170213180-4796e79c-a494-4575-a6a9-7dcea9df9e1e.png">
<img width="1281" alt="Screenshot 2022-05-25 at 13 38 09" src="https://user-images.githubusercontent.com/38417165/170213972-f487d37d-f622-4165-8fd6-17044b6d00b1.png">
**How to reproduce:**
1. Create a secondary user store.
2. Extract user store domain id.
3. Send a DELETE request to https://localhost:9443/t/carbon.super/api/server/v1/userstores/{userstore-domain-id}.
**Expected behavior:**
Deleting a secondary user store with an invalid user store domain id should return 404.
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: IS 6.0.0-m1
- OS: MacOS 12.1
- JDK: Oracle JDK 11.0.15
| test | delete a secondary user store giving an invalid user store domain id describe the issue when try to delete a secondary user store using invalid user store domain id using rest api does not gives expected as response instead gives or img width alt screenshot at src img width alt screenshot at src how to reproduce create a secondary user store extract user store domain id send a delete request to expected behavior delete a secondary user store giving an invalid user store domain id should give environment information please complete the following information remove any unnecessary fields product version is os macos jdk oracle jdk | 1 |
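The repro above comes down to issuing a DELETE against the userstore endpoint and checking the status code. A minimal sketch of that request follows (illustrative only: the host and domain id are placeholders, and the request is merely constructed here, not sent to a live server):

```python
import urllib.request

def build_delete_userstore_request(host: str, domain_id: str) -> urllib.request.Request:
    """Build (but do not send) the DELETE request for a secondary user store.

    The endpoint path follows the URL quoted in the issue; `host` and
    `domain_id` are placeholder values, not a real deployment.
    """
    url = f"https://{host}/t/carbon.super/api/server/v1/userstores/{domain_id}"
    return urllib.request.Request(url, method="DELETE")

# For an invalid domain id the server should answer 404, not 204 or 500.
req = build_delete_userstore_request("localhost:9443", "not-a-real-domain-id")
print(req.get_method(), req.full_url)
```

Sending this request with any HTTP client against a running server and asserting on the returned status code would make the expected-404 behavior an automated check.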
19,268 | 3,436,604,147 | IssuesEvent | 2015-12-12 14:47:02 | jgirald/ES2015F | https://api.github.com/repos/jgirald/ES2015F | opened | Fer imatges dels egipcis | Design TeamA | ## Descripció
Create the images of the Egyptian characters to use them on the web and the wiki
## Definition of Done
The images display correctly on the wiki
## Estimated effort:
2h | 1.0 | Fer imatges dels egipcis - ## Descripció
Create the images of the Egyptian characters to use them on the web and the wiki
## Definition of Done
The images display correctly on the wiki
## Estimated effort:
2h | non_test | fer imatges dels egipcis descripció crear les imatges dels personatges egipcisper a fer ne us a la web i la wiki definition of done les imatges es visualitzen correctament a la wiki esforç estimat | 0 |
122,131 | 10,212,662,649 | IssuesEvent | 2019-08-14 20:01:55 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Unexpected errors show in logs after deleting an RKE cluster | [zube]: To Test area/logging area/rke kind/bug-qa team/ca | <!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
- run Rancher: master e1e6fc93
- add an EC2 cluster
- after the cluster is active, enable cluster monitoring, it fails because of this https://github.com/rancher/rancher/issues/19381#issuecomment-490191909
- delete the cluster
**Result:**
- the cluster is removed successfully, but Rancher's log keeps showing the following errors
```
2019/05/07 18:43:41 [ERROR] failed on subscribe deployment: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe prometheus: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedServiceAccountToken: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedSecret: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe pod: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe cronJob: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedDockerCredential: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe daemonSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe ingress: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe replicaSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedCertificate: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe dnsRecord: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe statefulSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe alertmanager: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe serviceMonitor: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe replicationController: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe prometheusRule: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe deployment: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe replicaSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe replicationController: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe daemonSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe statefulSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe job: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe cronJob: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe job: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe persistentVolumeClaim: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe service: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedSshAuth: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedBasicAuth: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe configMap: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
```
**Other details that may be helpful:**
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI):
Rancher: master e1e6fc93
- Installation option (single install/HA):
Single install
<!--
If the reported issue is regarding a created cluster, please provide requested info below
-->
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported):
- Machine type (cloud/VM/metal) and specifications (CPU/memory):
- Kubernetes version (use `kubectl version`):
```
(paste the output here)
```
- Docker version (use `docker version`):
```
(paste the output here)
```
 | 1.0 | Unexpected errors show in logs after deleting an RKE cluster - <!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
- run Rancher: master e1e6fc93
- add an EC2 cluster
- after the cluster is active, enable cluster monitoring, it fails because of this https://github.com/rancher/rancher/issues/19381#issuecomment-490191909
- delete the cluster
**Result:**
- the cluster is removed successfully, but Rancher's log keeps showing the following errors
```
2019/05/07 18:43:41 [ERROR] failed on subscribe deployment: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe prometheus: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedServiceAccountToken: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedSecret: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe pod: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe cronJob: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedDockerCredential: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe daemonSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe ingress: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe replicaSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedCertificate: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe dnsRecord: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe statefulSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe alertmanager: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe serviceMonitor: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe replicationController: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe prometheusRule: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe deployment: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe replicaSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe replicationController: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe daemonSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe statefulSet: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe job: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe cronJob: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe job: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe persistentVolumeClaim: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe service: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedSshAuth: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe namespacedBasicAuth: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
2019/05/07 18:43:41 [ERROR] failed on subscribe configMap: ClusterUnavailable 503: cluster.management.cattle.io "c-mvvxv" not found
```
**Other details that may be helpful:**
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI):
Rancher: master e1e6fc93
- Installation option (single install/HA):
Single install
<!--
If the reported issue is regarding a created cluster, please provide requested info below
-->
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported):
- Machine type (cloud/VM/metal) and specifications (CPU/memory):
- Kubernetes version (use `kubectl version`):
```
(paste the output here)
```
- Docker version (use `docker version`):
```
(paste the output here)
```
| test | unexpeced errors show in logs after deleting an rke cluster please search for existing issues first then read to see what we expect in an issue for security issues please email security rancher com instead of posting a public issue in github you may but are not required to use the gpg key located on keybase what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible run rancher master add an cluster after the cluster is active enable cluster monitoring it fails because of this delete the cluster result the cluster is removed successfully but rancher s log keeps showing the following errors failed on subscribe deployment clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe prometheus clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe namespacedserviceaccounttoken clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe namespacedsecret clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe pod clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe cronjob clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe namespaceddockercredential clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe daemonset clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe ingress clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe replicaset clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe namespacedcertificate clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe dnsrecord clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe statefulset clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe 
alertmanager clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe servicemonitor clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe replicationcontroller clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe prometheusrule clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe deployment clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe replicaset clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe replicationcontroller clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe daemonset clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe statefulset clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe job clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe cronjob clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe job clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe persistentvolumeclaim clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe service clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe namespacedsshauth clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe namespacedbasicauth clusterunavailable cluster management cattle io c mvvxv not found failed on subscribe configmap clusterunavailable cluster management cattle io c mvvxv not found other details that may be helpful environment information rancher version rancher rancher rancher server image tag or shown bottom left in the ui rancher master installation option single install ha single install if the reported issue is regarding a created cluster please provide requested info below cluster information cluster 
type hosted infrastructure provider custom imported machine type cloud vm metal and specifications cpu memory kubernetes version use kubectl version paste the output here docker version use docker version paste the output here | 1 |
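The repeated "ClusterUnavailable 503 ... not found" errors in the row above come from watch subscriptions that keep firing for a cluster that no longer exists. The underlying pattern — tearing subscriptions down together with the resource that owns them — can be sketched generically (this is an illustration of the pattern only, not Rancher's actual code):

```python
class SubscriptionRegistry:
    """Track per-cluster subscriptions so deleting a cluster removes them too."""

    def __init__(self):
        # cluster_id -> set of resource kinds being watched
        self._subs = {}

    def subscribe(self, cluster_id: str, kind: str) -> None:
        self._subs.setdefault(cluster_id, set()).add(kind)

    def delete_cluster(self, cluster_id: str) -> None:
        # Dropping the whole entry here is what prevents later
        # "cluster ... not found" errors from handlers that would
        # otherwise keep polling a deleted cluster.
        self._subs.pop(cluster_id, None)

    def active(self, cluster_id: str):
        return self._subs.get(cluster_id, set())

reg = SubscriptionRegistry()
reg.subscribe("c-mvvxv", "deployment")
reg.subscribe("c-mvvxv", "pod")
reg.delete_cluster("c-mvvxv")
print(sorted(reg.active("c-mvvxv")))  # → []
```

In the bug report, the equivalent cleanup step appears to be missing or racing with the delete, so the handlers continue to resolve the removed `c-mvvxv` cluster.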
66,519 | 20,256,205,782 | IssuesEvent | 2022-02-14 23:37:46 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Weird design when having mixed state events with and without state keys | T-Defect S-Minor A-DevTools O-Uncommon | ### Steps to reproduce
Put state events of the same `event_type` to any given room. One of them should not have a `state_key`
Example code for nio:
```
await self.async_client.room_put_state(
room_id="!iuasdhfau:matrix.org",
event_type="org.example.custom_event",
content={"key": "val"},
state_key="")
await self.async_client.room_put_state(
room_id="!iuasdhfau:matrix.org",
event_type="org.example.custom_event",
content={"key": "val"},
state_key="testtest")
```
Now, in the Element devtools, use the "explore room states" function and check the state for `org.example.custom_event`
### Outcome
#### What did you expect?
Two buttons with "testtest", resp. _"empty"_ or "None"
#### What happened instead?
We see a weird empty button.

### Operating system
Fedora
### Application version
1.10.1
### How did you install the app?
Flathub
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Weird design when having mixed state events with and without state keys - ### Steps to reproduce
Put state events of the same `event_type` to any given room. One of them should not have a `state_key`
Example code for nio:
```
await self.async_client.room_put_state(
room_id="!iuasdhfau:matrix.org",
event_type="org.example.custom_event",
content={"key": "val"},
state_key="")
await self.async_client.room_put_state(
room_id="!iuasdhfau:matrix.org",
event_type="org.example.custom_event",
content={"key": "val"},
state_key="testtest")
```
Now, in the Element devtools, use the "explore room states" function and check the state for `org.example.custom_event`
### Outcome
#### What did you expect?
Two buttons with "testtest", resp. _"empty"_ or "None"
#### What happened instead?
We see a weird empty button.

### Operating system
Fedora
### Application version
1.10.1
### How did you install the app?
Flathub
### Homeserver
_No response_
### Will you send logs?
No | non_test | weird design when having mixed state events with and without state keys steps to reproduce put state events of the same event type to any given room one of them should not have a state key example code for nio await self async client room put state room id iuasdhfau matrix org event type org example custom event content key val state key await self async client room put state room id iuasdhfau matrix org event type org example custom event content key val state key testtest now in the element devtools use the explore room states function and check the state for org example custom event outcome what did you expect two buttons with testtest resp empty or none what happened instead we see a weird empty button operating system fedora application version how did you install the app flathub homeserver no response will you send logs no | 0 |
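A client rendering the state-key buttons described above has to make the empty `state_key` visible rather than drawing a blank button; the expected outcome in the report amounts to substituting a placeholder label. A minimal sketch (illustrative only, not Element's actual rendering code):

```python
def button_label(state_key: str) -> str:
    """Render a state_key as a button label, making the empty key visible."""
    return state_key if state_key else "(empty)"

# Two state events of the same type, one with an empty state_key,
# mirroring the nio example in the report.
events = [
    {"type": "org.example.custom_event", "state_key": ""},
    {"type": "org.example.custom_event", "state_key": "testtest"},
]
labels = [button_label(e["state_key"]) for e in events]
print(labels)  # → ['(empty)', 'testtest']
```

The exact placeholder text ("(empty)", "None", etc.) is a design choice; the point is that an empty string and a non-empty key must render distinguishably.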
487,243 | 14,021,013,449 | IssuesEvent | 2020-10-29 20:35:17 | Kedyn/fusliez-notes | https://api.github.com/repos/Kedyn/fusliez-notes | closed | Option to use detailed map | Priority: Low Status: Available Type: Enhancement | Option to swap the maps out to detailed versions that have been made using screenshots of the map. For newer players still trying to learn the maps, being able to see what it looks like on the map could be helpful.
Sorry if this isn't the place to post this, I remember there was a feedback thing on the website but I can't find it anymore. | 1.0 | Option to use detailed map - Option to swap the maps out to detailed versions that have been made using screenshots of the map. For newer players still trying to learn the maps, being able to see what it looks like on the map could be helpful.
Sorry if this isn't the place to post this, I remember there was a feedback thing on the website but I can't find it anymore. | non_test | option to use detailed map option to swap the maps out to detailed versions that have been made using screenshots of the map for newer players still trying to learn the maps being able to see what it looks like on the map could be helpful sorry if this isn t the place to post this i remember there was a feedback thing on the website but i can t find it anymore | 0 |
40,191 | 10,469,937,712 | IssuesEvent | 2019-09-23 00:47:47 | rust-lang/cargo | https://api.github.com/repos/rust-lang/cargo | closed | RUSTFLAGS="..." cargo .. should recompile std lib with flags enabled | C-feature-request Z-build-std | This is required for `rust-san` to work properly, and also for enabling some more efficient algorithms in `std::simd` that require specific target features (`SSE4`, `AVX`, `NEON`, etc.). Adding `Iterator::is_sorted` would also benefit from `SSE4` and `AVX`. It currently does run-time feature detection but it could avoid that if `std` was compiled with the appropriate flags. | 1.0 | RUSTFLAGS="..." cargo .. should recompile std lib with flags enabled - This is required for `rust-san` to work properly, and also for enabling some more efficient algorithms in `std::simd` that require specific target features (`SSE4`, `AVX`, `NEON`, etc.). Adding `Iterator::is_sorted` would also benefit from `SSE4` and `AVX`. It currently does run-time feature detection but it could avoid that if `std` was compiled with the appropriate flags. | non_test | rustflags cargo should recompile std lib with flags enabled this is required for rust san to work properly and also for enabling some more efficient algorithms in std simd that require specific target features avx neon etc adding iterator is sorted would also benefit from and avx it currently does run time feature detection but it could avoid that if std was compiled with the appropriate flags | 0 |
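In practice, the request in the row above (tracked under the `Z-build-std` label) amounts to rebuilding std with the same RUSTFLAGS as the crate, which nightly cargo exposes as `-Z build-std`. A sketch of how a driver script might assemble that invocation — the command is only constructed here, not executed, and the target triple and flags are placeholder examples:

```python
import os

def build_std_command(rustflags: str, target: str):
    """Assemble env and argv for a cargo invocation that rebuilds std
    with the given RUSTFLAGS (requires a nightly toolchain)."""
    env = dict(os.environ, RUSTFLAGS=rustflags)
    cmd = ["cargo", "+nightly", "build", "-Z", "build-std", "--target", target]
    return env, cmd

env, cmd = build_std_command("-C target-feature=+avx", "x86_64-unknown-linux-gnu")
print(" ".join(cmd))  # → cargo +nightly build -Z build-std --target x86_64-unknown-linux-gnu
```

With std compiled under the same target features, run-time feature detection in cases like `std::simd` could be resolved at compile time instead.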
93,273 | 11,762,237,000 | IssuesEvent | 2020-03-14 00:29:05 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | themeMode: ThemeMode.light, Not Working //Disabling dark theme in iOS apps | f: material design framework ⌺ platform-ios | I'm trying to disable dark theme in my app by setting
`themeMode: ThemeMode.light` in `MaterialApp`,
this does not seem to enforce light mode or the theme i'm using. Instead system setting is being used.
```
final ThemeData _themeData = ThemeData(
brightness: Brightness.light,
primarySwatch: Colors.blue,
accentColor: Colors.blue,
primaryColor: Colors.blue,
splashColor: Colors.transparent,
highlightColor: Colors.transparent,
);
MaterialApp(
themeMode: ThemeMode.light,
navigatorKey: navigatorKey,
navigatorObservers: <NavigatorObserver>[observer],
builder: (context, child) {
return ScrollConfiguration(
behavior:
MyBehavior(), //To remove the overflow glow effect in list view.
child: child,
);
},
title: 'XYZ',
debugShowCheckedModeBanner: false,
theme: _themeData,
routes: routes,
home: MyHomePage(),
),
``` | 1.0 | themeMode: ThemeMode.light, Not Working //Disabling dark theme in iOS apps - I'm trying to disable dark theme in my app by setting
`themeMode: ThemeMode.light` in `MaterialApp`,
this does not seem to enforce light mode or the theme i'm using. Instead system setting is being used.
```
final ThemeData _themeData = ThemeData(
brightness: Brightness.light,
primarySwatch: Colors.blue,
accentColor: Colors.blue,
primaryColor: Colors.blue,
splashColor: Colors.transparent,
highlightColor: Colors.transparent,
);
MaterialApp(
themeMode: ThemeMode.light,
navigatorKey: navigatorKey,
navigatorObservers: <NavigatorObserver>[observer],
builder: (context, child) {
return ScrollConfiguration(
behavior:
MyBehavior(), //To remove the overflow glow effect in list view.
child: child,
);
},
title: 'XYZ',
debugShowCheckedModeBanner: false,
theme: _themeData,
routes: routes,
home: MyHomePage(),
),
``` | non_test | thememode thememode light not working disabling dark theme in ios apps im trying to disable dark theme in my app by setting thememode thememode light in materialapp this does not seem to enforce light mode or the theme i m using instead system setting is being used final themedata themedata themedata brightness brightness light primaryswatch colors blue accentcolor colors blue primarycolor colors blue splashcolor colors transparent highlightcolor colors transparent materialapp thememode thememode light navigatorkey navigatorkey navigatorobservers builder context child return scrollconfiguration behavior mybehavior to remove the overflow glow effect in list view child child title xyz debugshowcheckedmodebanner false theme themedata routes routes home myhomepage | 0 |
72,147 | 7,286,109,403 | IssuesEvent | 2018-02-23 08:23:55 | EyeSeeTea/SurveillanceMyanmarApp | https://api.github.com/repos/EyeSeeTea/SurveillanceMyanmarApp | closed | rmelia@knowming.com on build #5: One screen to go... Counter problem | buddybug complexity - med (1-5hr) priority - medium testing type - bug | Feedback from rmelia@knowming.com : One screen to go... Counter problem
<img src="https://s3-us-west-2.amazonaws.com/buddybuild-screenshots/5854527e2d2ddd0100822e74/58976a9cec303001007bad40/3821926b-430f-491c-a98d-40a095fa0829.jpg" width="33%" height="33%" /><table><tr><td>Created</td><td>Sun Feb 05 2017 22:38:24 GMT+0000 (UTC)</td></tr><tr><td>Build</td><td>5</td></tr><tr><td>Device type</td><td>ZTE Blade A460</td></tr><tr><td>Screen size</td><td>480</td></tr><tr><td>Screen size</td><td>480px by 854px</td></tr><tr><td>Battery</td><td>42% Unplugged</td></tr><tr><td>Memory free</td><td>263 MB / 900 MB</td></tr></table>
[Link to buddybuild feedback from build 5](https://dashboard.buddybuild.com/apps/5854527e2d2ddd0100822e74/feedback?fid=5897a96040f39b0100e33bf8&bnum=5) | 1.0 | rmelia@knowming.com on build #5: One screen to go... Counter problem - Feedback from rmelia@knowming.com : One screen to go... Counter problem
<img src="https://s3-us-west-2.amazonaws.com/buddybuild-screenshots/5854527e2d2ddd0100822e74/58976a9cec303001007bad40/3821926b-430f-491c-a98d-40a095fa0829.jpg" width="33%" height="33%" /><table><tr><td>Created</td><td>Sun Feb 05 2017 22:38:24 GMT+0000 (UTC)</td></tr><tr><td>Build</td><td>5</td></tr><tr><td>Device type</td><td>ZTE Blade A460</td></tr><tr><td>Screen size</td><td>480</td></tr><tr><td>Screen size</td><td>480px by 854px</td></tr><tr><td>Battery</td><td>42% Unplugged</td></tr><tr><td>Memory free</td><td>263 MB / 900 MB</td></tr></table>
[Link to buddybuild feedback from build 5](https://dashboard.buddybuild.com/apps/5854527e2d2ddd0100822e74/feedback?fid=5897a96040f39b0100e33bf8&bnum=5) | test | rmelia knowming com on build one screen to go counter problem feedback from rmelia knowming com one screen to go counter problem created sun feb gmt utc build device type zte blade screen size screen size by battery unplugged memory free mb mb | 1 |
661,817 | 22,089,885,258 | IssuesEvent | 2022-06-01 04:31:34 | PokemonAutomation/Arduino-Source | https://api.github.com/repos/PokemonAutomation/Arduino-Source | opened | Support ARM for M1 Macs. | enhancement P4 - Low Priority Serial Programs | - [ ] Cleanup all the x86-specific stuff so that they are locked behind x86 flags. This will require implementing C++-only defaults for everything.
- [ ] Get project to build for ARM.
- [ ] Re-implement the necessary kernels for ARM. | 1.0 | Support ARM for M1 Macs. - - [ ] Cleanup all the x86-specific stuff so that they are locked behind x86 flags. This will require implementing C++-only defaults for everything.
- [ ] Get project to build for ARM.
- [ ] Re-implement the necessary kernels for ARM. | non_test | support arm for macs cleanup all the specific stuff so that they are locked behind flags this will require implementing c only defaults for everything get project to build for arm re implement the necessary kernels for arm | 0 |
235,706 | 18,056,649,192 | IssuesEvent | 2021-09-20 09:06:12 | girlscript/winter-of-contributing | https://api.github.com/repos/girlscript/winter-of-contributing | opened | Java 2.3: | documentation GWOC21 Video Audio Java | <hr>
## Description 📜
Add audio, documentation, and video regarding the **Blocks and lexicals**.
<hr>
## Domain of Contribution 📊
<!----Please delete options that are not relevant. And in order to tick the check box just but x inside them for example [x] like this----->
- [x] Java
<hr>
## Location of File to be added
The files should be added inside the `Blocks and lexicals` folder which is inside the `Fundamental` folder present in the `Java` main folder. The documentation file can be in the `.md`/`.py`/`.ipynb` format. Audio should be in `.mp3` format and to be submitted through a Google drive link. Video can be in any format and should be submitted through a Google Drive link.
**Note:**
- You are required to comment with `\assign CONTENT_TYPE`, so that other content types can be created at the same time.
- Failing to comment in the above manner will lead to the removal of assignment from the issue even if the bot assigns.
- You are required to do a PR within 7 days otherwise you will be unassigned and others will be assigned.
- Issues will be assigned on a first come first serve basis.
| 1.0 | Java 2.3: - <hr>
## Description 📜
Add audio, documentation, and video regarding the **Blocks and lexicals**.
<hr>
## Domain of Contribution 📊
<!----Please delete options that are not relevant. And in order to tick the check box just but x inside them for example [x] like this----->
- [x] Java
<hr>
## Location of File to be added
The files should be added inside the `Blocks and lexicals` folder which is inside the `Fundamental` folder present in the `Java` main folder. The documentation file can be in the `.md`/`.py`/`.ipynb` format. Audio should be in `.mp3` format and to be submitted through a Google drive link. Video can be in any format and should be submitted through a Google Drive link.
**Note:**
- You are required to comment with `\assign CONTENT_TYPE`, so that other content types can be created at the same time.
- Failing to comment in the above manner will lead to the removal of assignment from the issue even if the bot assigns.
- You are required to do a PR within 7 days otherwise you will be unassigned and others will be assigned.
- Issues will be assigned on a first come first serve basis.
| non_test | java description 📜 add audio documentation and video regarding the blocks and lexicals domain of contribution 📊 java location of file to be added the files should be added inside the blocks and lexicals folder which is inside the fundamental folder present in the java main folder the documentation file can be in the md py ipynb format audio should be in format and to be submitted through a google drive link video can be in any format and should be submitted through a google drive link note you are required to comment with assign content type so that other content types can be created at the same time failing to comment in the above manner will lead to the removal of assignment from the issue even if the bot assigns you are required to do a pr within days otherwise you will be unassigned and others will be assigned issues will be assigned on a first come first serve basis | 0 |
84,110 | 7,890,797,972 | IssuesEvent | 2018-06-28 09:54:24 | btc-ag/service-idl | https://api.github.com/repos/btc-ag/service-idl | opened | Identify set of basic features that works in all technologies | testing | - [ ] Identify set of basic features (with @GerrietReents)
- [ ] Add "good" test cases for these basic features
- [ ] Ensure that the integration tests for these basic features run
All other features will only be tested/analyzed after 1.0.0. | 1.0 | Identify set of basic features that works in all technologies - - [ ] Identify set of basic features (with @GerrietReents)
- [ ] Add "good" test cases for these basic features
- [ ] Ensure that the integration tests for these basic features run
All other features will only be tested/analyzed after 1.0.0. | test | identify set of basic features that works in all technologies identify set of basic features with gerrietreents add good test cases for these basic features ensure that the integration tests for these basic features run all other features will only be tested analyzed after | 1 |
64,913 | 7,850,157,461 | IssuesEvent | 2018-06-20 07:36:59 | greatnewcls/E55Q3NOULHJ77TLQU57Z4QQM | https://api.github.com/repos/greatnewcls/E55Q3NOULHJ77TLQU57Z4QQM | reopened | N/HQ/uPvWYi4xqxf1fCHFL3aOD2AJmlhegktKPddcXWiICli7ibSP0Cx5cy5sYVw2VUcofGrUmRE4V5JFxOBmMP+zHY2iIcOZrypMaLHXYTPiQZg3PfQfs4UaXRdGdBIef+k6Krky+bcPNI//htNXZ8Mb2Nj6ELXCKQI8v+Eu1Y= | design | BoEaaeU1lOWXm1JmVso+mwlmNEseXtmq0IxFB0KX9TGGam7Bv0bbcALtj9IluYEeP6FdGia9/XLUO3prppxf0TmxRfPpFdQ13PyD2pDbyWkdK56hJtulQ8m1KCCHUnnZ9Rff64iuj5jFYA5wb69k7C7Bgv/jC0dwUDAl8fCXd/AfRMSdf3qvv8jEZbNNO4pB+Yu5djYYDx7KMRDnDANlFbI277Lio6ZzvyUG+UpNdc0A6jmjJO87rMTlJagW3DzA+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRVbMvirlQmWu302XvdpteEGvlSk8etSNhXZqumRImXglPmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVQNoeFUNY84+llCrNUEX7KGOBy6oqxW12tOHV+BgJGZ4t4DoV4oZ9Ex8KvaaYmrP/txUPHutyl3MwOSuOKzdzg4lyCiehRF74zXhtjLmzRDf5i7l2NhgPHsoxEOcMA2UV30Tt/78In6S/WJDEYbTnT9WwnukxDsujxCnCgkA59Kyu9ciPRYEyrzruAEGNDZPX+Yu5djYYDx7KMRDnDANlFa/0OYhi35w3laYRulGm1QnIm4WGEJGBGTjfSJLPDPPO+Yu5djYYDx7KMRDnDANlFc/Id4Y2sN+QYY55/4Dor2zxZIK9PvlLHhHl5aAkbWW1dpINV3pfxxmJRwj7Mb9mfZ7ld3sNSleL3BpU32tcUwWtTpogjKjFLZrfrgsIzgNTSer5GdxSPC8a1fY9j4TqT62XzU+iDSS6rvs30WHcf2UL/QYpSisROEct5IfesKr+MqzstJN1MLFej12ImUkySA72TpaSGNORQb3QsR0AGcUcnjtQnN/NHa+24kvnJxew/fjvhlOvye1yfIClERU0MrZbjEMv4y5pmvLKu0/Et6uFqoPPTNMhdwz59FVSk85qM8xGVb6TnYRlHfyeb6mjMPmLuXY2GA8eyjEQ5wwDZRW0fRrzxA7K9Km7kHDeJKPQFzW9cnlw0uJzzrV6oFDaV9aEso24GGUgtfQl/hGtx1ZnjjuZRvh1G9EeBL4CqRNbs21LF9VIH/tka0MaonHmRiypYJ+ttJwKsm0UdteJ/O4Z2iA1R+9vYXNW1kqCAstZasxO7IBfEhFMYDMXt3i5XFxB7GRwyNkbIVaUoe8V4Xmh52bPtZF2DXQACYDPPKAspWCmIGNWrzSIkauwEH4wEitXRXICV52MQFXUbFXIFwG2qFdeONPp+/RP6qLSLAbw+Yu5djYYDx7KMRDnDANlFSpjEo8d1Orlm3ErKNFTHTS8k7IOzNOECG2CX3VF9jn/+Yu5djYYDx7KMRDnDANlFZFxC+WPJZke5xUO1G5trknexCqXo0rMKB644qdLeuvO+Yu5djYYDx7KMRDnDANlFeZSIVlJa/nJ1DwNpiQu4cA/Jm/5baA86Fbn72TXRdz1VRdY4hiVreoQV6gSPEJM9u8D0jWU55ylVNOrPaGNRur5i7l2NhgPHsoxEOcMA2UV/7e+nUULI8UL4yjl2fXGNnq/z2ddZkLCP7mOsKAH2f67b1+k6g8AcRy9U+jbsk6o+Yu5djYYDx7KMRDnDANlFfy78Qk1CDLB91xI/
DV5l6o3DYhF5JMjjT4WwfcloiGE+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRWNjDjfwvxN+nXFMUvIbAj5QharGAsDdoell6dIiOX5dWQ6oVezC65DmAJETF1WMpj5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFVMU2Yo/LxhpZHDBD2vOvVYHktY3bmSUjXANigIN95wRykRcffDgHXdSiyRu+/yt5ZQRZ7MP2MXyhjpMJPRLoDGV7sTInYXj3YD7DSqEqYF6mbAD78osf5O3Dq7xrjE+onGpH7wwwdL1NDOXNQkaYYH5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFQG++sa3v18MWY1PEhZX/0vGoBT5Z5rle1dh8Rlq3tfF+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRX6zhPYFyxF95vk4V27FSUoX1k5xV548sO/74byVdQErKnaPBX9wh8IxMvZwb4z75P5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFeBANv8TG3T+mGfM/NWmZVT+jv8iyUwiBssJSbJZamQRxw0+Qng2cPUHjzoc4eUNbvmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVs0qwetPPvlLZ8p6K2cQRwdXn00c1LQFKGezJYFm8BED5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFakLf8yDXMIORYrWI3hQDM+ZIxvbPjKvpi31tOhfTIRHo+skO8ieBhISvsQh9KtS+/mLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVA0+Vr6mZmMmI7WbPY1NbR/atQ7Zs4hlS0zszt8AbQv75i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFf07tl4mHkEW/PYpjLUXKujj+jp2lm6iylEl2qb4TeoaSo8xWy5youGX4d0Zt7FEn/mLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVVsqLbOSfLS1ukqFvEtB0GHxm3eP6/6z0zdda8cSj2iT5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFfRWi0Qpmc/B/0LMxqa9xH7my7CfYjfZ3HlwClnEZ+5dSwjYtXvrSlgRBbwhVk142PmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVwdZU0MmeUJ5SDrlcI668yP1nzkj+JpHyen+EZeywn2x8Qv1TPp7pAWVzVXMUTZXk+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRX46jcU1y6i3/sNVBk5kx64NdYebjVd+67khZNZkw8M0vmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVvx9o096sQBuan7mCa+9qokZ6PJMdrs4JbsGhsZNKMORmnMpJCLZXLSxg2Nvb4Hle+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRXpdgBtLgqAECupLmhqJjo+iMrlusj8/TRXcMIPzPMDUvmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVz2IxnbJ/aBhsf6S1VJwn4eS9JZRIzj4jX+9SG9yurzAG0+fNMmRn1b6GNgURSzxu+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRUgM1t2A00Yc2RTjRgdb8P7QZFkMo8/ry/AGkL07LV93/mLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVkM99kWIzWJNz8HcxbPvMiOgJrMPwIqW+Eyo+khLya3RJYV0oM0DWuhDOIhp0F6Ko+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRUju/s8SLWNa+ocWEa/BVdaDPepo8A/5xmDy/3fJOrBQvmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPH
soxEOcMA2UV+s4T2BcsRfeb5OFduxUlKFtESPz/k55AZVY/waLrQXHBg+Ggzlop6Ragw+WLJ7ld+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRVPIkmsRc1EQ4XqbnrBaET8N7eyCOSZ0+jMxO8NUMEJb8cNPkJ4NnD1B486HOHlDW75i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFVVQyqlayy1X5GqR+p+rhnFkOEtBTubUFtYmK/HtRPYZ+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRWJ/9zt0IAi6TJ8MLl6wvZQjjTaaF9anBCSsT+JyaT7CqPrJDvIngYSEr7EIfSrUvv5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFRk075ZYypt+W8etCSNfjkhz3jFzzFtyk0XDLbCMfFwm+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRU2mjAPkrorPj0Iv7DSnIRfD+4GlIXXkcC6g72M25uzngT+W5mlA/9TOM4IqtFAHGT5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFfKrY/Ti2TIi2ASjJOrMacFhRFebobFQHFDSZJxfMBIc+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRWCJhpJI96mr0LTL40YVhNpqGZt9uVTUNbNUdNAVNOru3ik8286r8hMXId+Kg3mp8T5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFTeY3uBdeyMo9DrA7sN4jZ+B9YLk+oJB1N0bkBq03LS/fEL9Uz6e6QFlc1VzFE2V5PmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVeWMrMLy4xNAzj1+oMZaUniJyAOZJemO5I45rnxNNSf35i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFf7ANTqTA3mdIW299iPXdaKVE2ZaXLku0bHiO3hikg7GZpzKSQi2Vy0sYNjb2+B5XvmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVhHhpAS0EpR6Dgrw3xTN1aRa8iwqo7BU+JSQoiNE6OQX5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFc9iMZ2yf2gYbH+ktVScJ+HyGG+ujlMG3ydqKF5/GfELTLHeBSe//nBpk5dTjLm3B/mLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVuhpiarMuuh4TqvRNo2pGZrAojPc4AZuGF8YAcRXoY335i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFV4oMhnFnDRrE5mIFhk33eObQGiIc7f+/G0BIWIJuVUs/MrGv+ml3e4XSSiI7buPNfmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UV1JFqJTXPUkcf2aEto5Fskv7AvisvL2G87k2nLtOVMvn5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFfrOE9gXLEX3m+ThXbsVJShl+WxQ6CYIM1W+6yA9Z0f7Kso9HZcv8L1UsmOwlyUh1fmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVTyJJrEXNREOF6m56wWhE/BWP92NTAfZ/K7k0AwCuH33HDT5CeDZw9QePOhzh5Q1u+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRUqmK9Os/66eExzp27Lr+ijOpg1GH6tzsxJ9hEz4TrzuPmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UV8lBma82jpG6MkNWKP9HCcJ2shDj/nrGjx2LBC0Wj2V2j6yQ7yJ4GEhK+xCH0q1L7+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRXtnjf0100hgTSMR/4smVBVwiSeq
u1ySpE7PrXZpGQY/PmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVQ1ZTY6BJRkUiZCChBhOrEqeq6tchYrNOJsrbUYIx6/1KjzFbLnKi4Zfh3Rm3sUSf+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRUkT4Iwu5VkgdejDxztZmOXFWLX2CMIXY7kQ7Gn/a6FSfmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVdXekB2kmclrqWJ9AurTpOUj2tHMa2gmuMS/2OcBzNckOcxCylWWI2Go5ZXoUtiv++Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRW52cqfG4qFkJK6ulsk0Ul/zpFdzcb35YxpdwSNsK+T53xC/VM+nukBZXNVcxRNleT5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFX0PlS1fN8PRQ63Gm9xlOqP1bCMpxHXAVnT1XtEcMNMr+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRX+wDU6kwN5nSFtvfYj13WiIHeZ4O8yqEXCHLWstA/WEWacykkItlctLGDY29vgeV75i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFf08MmhqPb1220TCiXzQU44flRuRUs39giJEjz/EnIsC+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRUTAHoDByEKOV0SvJ7Kaen5qCk1NIQfKP1k/96iEeRl0kyx3gUnv/5waZOXU4y5twf5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFWlqCv726CVRf+mweHTUvJPZYcejONkqTEt3bjoAmXZr+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRU53nc2FXcCYnfTYQ1FajfrzZ7QSNlWayuwRIq1Q4qsvIBwfpWMg84W8VXPc4MT6UhVzys6ZN0czLh6d6ug8M2tdIVn9i2NtWkzSqMi+mLIN0QpdXq3w1uUb6Y8OUR9Jeo2f+JGZfBq573tf9e0TYwMLdO4qBgfLtwydRU2An2VtpQYPSNo7qxHG32lLG8DP2I= | 1.0 | N/HQ/uPvWYi4xqxf1fCHFL3aOD2AJmlhegktKPddcXWiICli7ibSP0Cx5cy5sYVw2VUcofGrUmRE4V5JFxOBmMP+zHY2iIcOZrypMaLHXYTPiQZg3PfQfs4UaXRdGdBIef+k6Krky+bcPNI//htNXZ8Mb2Nj6ELXCKQI8v+Eu1Y= - 
BoEaaeU1lOWXm1JmVso+mwlmNEseXtmq0IxFB0KX9TGGam7Bv0bbcALtj9IluYEeP6FdGia9/XLUO3prppxf0TmxRfPpFdQ13PyD2pDbyWkdK56hJtulQ8m1KCCHUnnZ9Rff64iuj5jFYA5wb69k7C7Bgv/jC0dwUDAl8fCXd/AfRMSdf3qvv8jEZbNNO4pB+Yu5djYYDx7KMRDnDANlFbI277Lio6ZzvyUG+UpNdc0A6jmjJO87rMTlJagW3DzA+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRVbMvirlQmWu302XvdpteEGvlSk8etSNhXZqumRImXglPmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVQNoeFUNY84+llCrNUEX7KGOBy6oqxW12tOHV+BgJGZ4t4DoV4oZ9Ex8KvaaYmrP/txUPHutyl3MwOSuOKzdzg4lyCiehRF74zXhtjLmzRDf5i7l2NhgPHsoxEOcMA2UV30Tt/78In6S/WJDEYbTnT9WwnukxDsujxCnCgkA59Kyu9ciPRYEyrzruAEGNDZPX+Yu5djYYDx7KMRDnDANlFa/0OYhi35w3laYRulGm1QnIm4WGEJGBGTjfSJLPDPPO+Yu5djYYDx7KMRDnDANlFc/Id4Y2sN+QYY55/4Dor2zxZIK9PvlLHhHl5aAkbWW1dpINV3pfxxmJRwj7Mb9mfZ7ld3sNSleL3BpU32tcUwWtTpogjKjFLZrfrgsIzgNTSer5GdxSPC8a1fY9j4TqT62XzU+iDSS6rvs30WHcf2UL/QYpSisROEct5IfesKr+MqzstJN1MLFej12ImUkySA72TpaSGNORQb3QsR0AGcUcnjtQnN/NHa+24kvnJxew/fjvhlOvye1yfIClERU0MrZbjEMv4y5pmvLKu0/Et6uFqoPPTNMhdwz59FVSk85qM8xGVb6TnYRlHfyeb6mjMPmLuXY2GA8eyjEQ5wwDZRW0fRrzxA7K9Km7kHDeJKPQFzW9cnlw0uJzzrV6oFDaV9aEso24GGUgtfQl/hGtx1ZnjjuZRvh1G9EeBL4CqRNbs21LF9VIH/tka0MaonHmRiypYJ+ttJwKsm0UdteJ/O4Z2iA1R+9vYXNW1kqCAstZasxO7IBfEhFMYDMXt3i5XFxB7GRwyNkbIVaUoe8V4Xmh52bPtZF2DXQACYDPPKAspWCmIGNWrzSIkauwEH4wEitXRXICV52MQFXUbFXIFwG2qFdeONPp+/RP6qLSLAbw+Yu5djYYDx7KMRDnDANlFSpjEo8d1Orlm3ErKNFTHTS8k7IOzNOECG2CX3VF9jn/+Yu5djYYDx7KMRDnDANlFZFxC+WPJZke5xUO1G5trknexCqXo0rMKB644qdLeuvO+Yu5djYYDx7KMRDnDANlFeZSIVlJa/nJ1DwNpiQu4cA/Jm/5baA86Fbn72TXRdz1VRdY4hiVreoQV6gSPEJM9u8D0jWU55ylVNOrPaGNRur5i7l2NhgPHsoxEOcMA2UV/7e+nUULI8UL4yjl2fXGNnq/z2ddZkLCP7mOsKAH2f67b1+k6g8AcRy9U+jbsk6o+Yu5djYYDx7KMRDnDANlFfy78Qk1CDLB91xI/DV5l6o3DYhF5JMjjT4WwfcloiGE+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRWNjDjfwvxN+nXFMUvIbAj5QharGAsDdoell6dIiOX5dWQ6oVezC65DmAJETF1WMpj5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFVMU2Yo/LxhpZHDBD2vOvVYHktY3bmSUjXANigIN95wRykRcffDgHXdSiyRu+/yt5ZQRZ7MP2MXyhjpMJPRLoDGV7sTInYXj3YD7DSqEqYF6mbAD78osf5O3Dq7xrjE+onGpH7wwwdL1NDOXNQkaYYH5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDn
DANlFQG++sa3v18MWY1PEhZX/0vGoBT5Z5rle1dh8Rlq3tfF+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRX6zhPYFyxF95vk4V27FSUoX1k5xV548sO/74byVdQErKnaPBX9wh8IxMvZwb4z75P5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFeBANv8TG3T+mGfM/NWmZVT+jv8iyUwiBssJSbJZamQRxw0+Qng2cPUHjzoc4eUNbvmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVs0qwetPPvlLZ8p6K2cQRwdXn00c1LQFKGezJYFm8BED5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFakLf8yDXMIORYrWI3hQDM+ZIxvbPjKvpi31tOhfTIRHo+skO8ieBhISvsQh9KtS+/mLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVA0+Vr6mZmMmI7WbPY1NbR/atQ7Zs4hlS0zszt8AbQv75i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFf07tl4mHkEW/PYpjLUXKujj+jp2lm6iylEl2qb4TeoaSo8xWy5youGX4d0Zt7FEn/mLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVVsqLbOSfLS1ukqFvEtB0GHxm3eP6/6z0zdda8cSj2iT5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFfRWi0Qpmc/B/0LMxqa9xH7my7CfYjfZ3HlwClnEZ+5dSwjYtXvrSlgRBbwhVk142PmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVwdZU0MmeUJ5SDrlcI668yP1nzkj+JpHyen+EZeywn2x8Qv1TPp7pAWVzVXMUTZXk+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRX46jcU1y6i3/sNVBk5kx64NdYebjVd+67khZNZkw8M0vmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVvx9o096sQBuan7mCa+9qokZ6PJMdrs4JbsGhsZNKMORmnMpJCLZXLSxg2Nvb4Hle+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRXpdgBtLgqAECupLmhqJjo+iMrlusj8/TRXcMIPzPMDUvmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVz2IxnbJ/aBhsf6S1VJwn4eS9JZRIzj4jX+9SG9yurzAG0+fNMmRn1b6GNgURSzxu+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRUgM1t2A00Yc2RTjRgdb8P7QZFkMo8/ry/AGkL07LV93/mLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVkM99kWIzWJNz8HcxbPvMiOgJrMPwIqW+Eyo+khLya3RJYV0oM0DWuhDOIhp0F6Ko+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRUju/s8SLWNa+ocWEa/BVdaDPepo8A/5xmDy/3fJOrBQvmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UV+s4T2BcsRfeb5OFduxUlKFtESPz/k55AZVY/waLrQXHBg+Ggzlop6Ragw+WLJ7ld+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRVPIkmsRc1EQ4XqbnrBaET8N7eyCOSZ0+jMxO8NUMEJb8cNPkJ4NnD1B486HOHlDW75i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFVVQyqlayy1X5GqR+p+rhnFkOEtBTubUFtYmK/HtRPYZ+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRWJ/9zt0IAi6TJ8MLl6wvZQjjTaaF9anBCSsT+JyaT7CqPrJDvIngYS
Er7EIfSrUvv5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFRk075ZYypt+W8etCSNfjkhz3jFzzFtyk0XDLbCMfFwm+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRU2mjAPkrorPj0Iv7DSnIRfD+4GlIXXkcC6g72M25uzngT+W5mlA/9TOM4IqtFAHGT5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFfKrY/Ti2TIi2ASjJOrMacFhRFebobFQHFDSZJxfMBIc+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRWCJhpJI96mr0LTL40YVhNpqGZt9uVTUNbNUdNAVNOru3ik8286r8hMXId+Kg3mp8T5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFTeY3uBdeyMo9DrA7sN4jZ+B9YLk+oJB1N0bkBq03LS/fEL9Uz6e6QFlc1VzFE2V5PmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVeWMrMLy4xNAzj1+oMZaUniJyAOZJemO5I45rnxNNSf35i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFf7ANTqTA3mdIW299iPXdaKVE2ZaXLku0bHiO3hikg7GZpzKSQi2Vy0sYNjb2+B5XvmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVhHhpAS0EpR6Dgrw3xTN1aRa8iwqo7BU+JSQoiNE6OQX5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFc9iMZ2yf2gYbH+ktVScJ+HyGG+ujlMG3ydqKF5/GfELTLHeBSe//nBpk5dTjLm3B/mLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVuhpiarMuuh4TqvRNo2pGZrAojPc4AZuGF8YAcRXoY335i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFV4oMhnFnDRrE5mIFhk33eObQGiIc7f+/G0BIWIJuVUs/MrGv+ml3e4XSSiI7buPNfmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UV1JFqJTXPUkcf2aEto5Fskv7AvisvL2G87k2nLtOVMvn5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFfrOE9gXLEX3m+ThXbsVJShl+WxQ6CYIM1W+6yA9Z0f7Kso9HZcv8L1UsmOwlyUh1fmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVTyJJrEXNREOF6m56wWhE/BWP92NTAfZ/K7k0AwCuH33HDT5CeDZw9QePOhzh5Q1u+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRUqmK9Os/66eExzp27Lr+ijOpg1GH6tzsxJ9hEz4TrzuPmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UV8lBma82jpG6MkNWKP9HCcJ2shDj/nrGjx2LBC0Wj2V2j6yQ7yJ4GEhK+xCH0q1L7+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRXtnjf0100hgTSMR/4smVBVwiSequ1ySpE7PrXZpGQY/PmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVQ1ZTY6BJRkUiZCChBhOrEqeq6tchYrNOJsrbUYIx6/1KjzFbLnKi4Zfh3Rm3sUSf+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRUkT4Iwu5VkgdejDxztZmOXFWLX2CMIXY7kQ7Gn/a6FSfmLuXY2GA8eyjEQ5wwDZRX5i7l2NhgPHsoxEOcMA2UVdXekB2kmclrqWJ9AurTpOUj2tHMa2gmuMS/2OcBzNckOcxCylWWI2Go5ZXoUtiv++Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRW52cqf
G4qFkJK6ulsk0Ul/zpFdzcb35YxpdwSNsK+T53xC/VM+nukBZXNVcxRNleT5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFX0PlS1fN8PRQ63Gm9xlOqP1bCMpxHXAVnT1XtEcMNMr+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRX+wDU6kwN5nSFtvfYj13WiIHeZ4O8yqEXCHLWstA/WEWacykkItlctLGDY29vgeV75i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFf08MmhqPb1220TCiXzQU44flRuRUs39giJEjz/EnIsC+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRUTAHoDByEKOV0SvJ7Kaen5qCk1NIQfKP1k/96iEeRl0kyx3gUnv/5waZOXU4y5twf5i7l2NhgPHsoxEOcMA2UV+Yu5djYYDx7KMRDnDANlFWlqCv726CVRf+mweHTUvJPZYcejONkqTEt3bjoAmXZr+Yu5djYYDx7KMRDnDANlFfmLuXY2GA8eyjEQ5wwDZRU53nc2FXcCYnfTYQ1FajfrzZ7QSNlWayuwRIq1Q4qsvIBwfpWMg84W8VXPc4MT6UhVzys6ZN0czLh6d6ug8M2tdIVn9i2NtWkzSqMi+mLIN0QpdXq3w1uUb6Y8OUR9Jeo2f+JGZfBq573tf9e0TYwMLdO4qBgfLtwydRU2An2VtpQYPSNo7qxHG32lLG8DP2I= | non_test | n hq bcpni nha jm mgfm nwmzvt pypjluxkujj b jphyen ry eyo ocwea walrqxhbg p rhnfkoetbtubuftymk htrpyz ktvscj hygg gfeltlhebse mrgv thxbsvjshl vm enisc | 0 |
230,805 | 18,716,899,424 | IssuesEvent | 2021-11-03 06:48:47 | SAP/ui5-webcomponents | https://api.github.com/repos/SAP/ui5-webcomponents | closed | ui5-table: setting min-width without demand-popin should hide data and headers | bug Medium Prio TOPIC RL 1.0 Release Testing | ### **Bug Description**
Check the example below: a table, two of the columns have `min-width="320"`, but no `demand-popin`. Resize below 320px. Headers disappear, but the data columns remain. If `demand-popin` is set, they correctly drop and are not separate columns.
### **Expected Behavior**
If a `min-width` is set, and `demand-popin` is not present, both header and data should disappear.
### **Isolated Example**
https://codesandbox.io/s/ui5-webcomponents-forked-3gwo5?file=/index.html
| 1.0 | ui5-table: setting min-width without demand-popin should hide data and headers - ### **Bug Description**
Check the example below: a table, two of the columns have `min-width="320"`, but no `demand-popin`. Resize below 320px. Headers disappear, but the data columns remain. If `demand-popin` is set, they correctly drop and are not separate columns.
### **Expected Behavior**
If a `min-width` is set, and `demand-popin` is not present, both header and data should disappear.
### **Isolated Example**
https://codesandbox.io/s/ui5-webcomponents-forked-3gwo5?file=/index.html
| test | table setting min width without demand popin should hide data and headers bug description check the example below a table two of the columns have min width but no demand popin resize below headers disappear but the data columns remain if demand popin is set they correctly drop and are not separate columns expected behavior if a min width is set and demand popin is not present both header and data should disappear isolated example | 1 |
85,222 | 24,544,673,621 | IssuesEvent | 2022-10-12 07:51:57 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [SB] [VAPT] Buttons are blinking when admin clicks on the buttons | Bug P2 Study builder Process: Fixed Process: Tested dev | Buttons are blinking when the admin clicks on the buttons (e.g. Save, Cancel, Mark as completed, Upload)
**Note:**
1. Issue should be fixed in all the places
2. Issue observed after resolving #4991 issue
https://user-images.githubusercontent.com/71445210/192081980-bf00ff71-2900-4865-8367-d35058c7b4db.mp4
| 1.0 | [SB] [VAPT] Buttons are blinking when admin clicks on the buttons - Buttons are blinking when the admin clicks on the buttons (e.g. Save, Cancel, Mark as completed, Upload)
**Note:**
1. Issue should be fixed in all the places
2. Issue observed after resolving #4991 issue
https://user-images.githubusercontent.com/71445210/192081980-bf00ff71-2900-4865-8367-d35058c7b4db.mp4
| non_test | buttons are blinking when admin clicks on the buttons buttons are blinking when the admin clicks on the buttons eg save cancel mark as completed upload note issue should be fixed in all the places issue observed after resolving issue | 0 |
53,928 | 11,163,452,934 | IssuesEvent | 2019-12-26 22:44:17 | ssm-deepcove/website | https://api.github.com/repos/ssm-deepcove/website | opened | Fix CMS component sizing issue | Investigate UI enhancement improve code | We need to address the way media and text components work. Issues:
- Don't render components in view mode if they have no content
- Template 4 image is way too large; this is an issue with the way we size images
- Code is a bit messy; can we look at cleaning it up?
This is something I want us to tackle, but not until mobile CMS is done.
| 1.0 | Fix CMS component sizing issue - We need to address the way media and text components work. Issues:
- Don't render components in view mode if they have no content
- Template 4 image is way too large; this is an issue with the way we size images
- Code is a bit messy; can we look at cleaning it up?
This is something I want us to tackle, but not until mobile CMS is done.
| non_test | fix cms component sizing issue we need to address the way media and text components work issues don t render components in view mode if they have no content template image is way too large this is an issue with the way we size images code is a bit messy can we look at cleaning it up this is something i want us to tackle but not until mobile cms is done | 0 |
13,100 | 8,120,043,848 | IssuesEvent | 2018-08-16 00:14:36 | webcomponents/shadydom | https://api.github.com/repos/webcomponents/shadydom | closed | Implement synchronous ShadyCSS scoping when inserting and removing nodes | enhancement performance | Using webcomponents/shadycss#200, we can synchronously scope nodes when inserting and removing from the document, which is more correct and possibly faster than the Mutation Observer currently used. | True | Implement synchronous ShadyCSS scoping when inserting and removing nodes - Using webcomponents/shadycss#200, we can synchronously scope nodes when inserting and removing from the document, which is more correct and possibly faster than the Mutation Observer currently used. | non_test | implement synchronous shadycss scoping when inserting and removing nodes using webcomponents shadycss we can synchronously scope nodes when inserting and removing from the document which is more correct and possibly faster than the mutation observer currently used | 0 |
779,948 | 27,373,156,146 | IssuesEvent | 2023-02-28 02:13:29 | UNopenGIS/7 | https://api.github.com/repos/UNopenGIS/7 | closed | スマート地図伝習所(smart maps training school) | priority/MAY 伝習 | # Pain Points
When doing hands-on work with data, it would be nice to leave chat-style notes as a log; the timeline could then be reviewed later and know-how shared, but there is no good place for that.
Hands-on data work tends to drift sideways, so fixing the theme too tightly ends up inviting more off-topic discussion.
# What We Want to Try
Create the Smart Maps Training School (スマート地図伝習所) on Matrix or Discord, as a place where information can be dumped as much as desired.
# Vision Statement
What happens here, stays here.
# Next Actions
- [x] Roughly set it up on Matrix and see whether it works. | 1.0 | スマート地図伝習所(smart maps training school) - # Pain Points
When doing hands-on work with data, it would be nice to leave chat-style notes as a log; the timeline could then be reviewed later and know-how shared, but there is no good place for that.
Hands-on data work tends to drift sideways, so fixing the theme too tightly ends up inviting more off-topic discussion.
# What We Want to Try
Create the Smart Maps Training School (スマート地図伝習所) on Matrix or Discord, as a place where information can be dumped as much as desired.
# Vision Statement
What happens here, stays here.
# Next Actions
- [x] Roughly set it up on Matrix and see whether it works. | non_test | smart maps training school pain points when doing hands on work with data it would be nice to leave chat style notes as a log the timeline could then be reviewed later and know how shared but there is no good place for that hands on data work tends to drift sideways so fixing the theme too tightly ends up inviting more off topic discussion what we want to try create the smart maps training school on matrix or discord as a place where information can be dumped as much as desired vision statement what happens here stays here next actions roughly set it up on matrix and see whether it works | 0
64,657 | 6,916,634,981 | IssuesEvent | 2017-11-29 03:46:06 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | AzureMinimumMasterNodesTests @AwaitsFix because of flakiness | :Plugin Discovery Azure Classic test v6.2.0 v7.0.0 | [These tests](https://github.com/elastic/elasticsearch/blob/7ac361d86ecae47052b4132e63c5c3e97a09ccc7/plugins/discovery-azure-classic/src/test/java/org/elasticsearch/discovery/azure/classic/AzureMinimumMasterNodesTests.java#L44) don't seem to have run since May 2015 due to the `@AwaitsFix` annotation, but the referenced issue (https://github.com/elastic/elasticsearch-cloud-azure/issues/89) is closed.
Additionally, `discovery-azure-classic` [is deprecated](https://github.com/elastic/elasticsearch-cloud-azure/issues/91#issuecomment-229113595).
I've updated the annotation to point at this issue for now, but feel that the best thing to do is just to get rid of this test. | 1.0 | AzureMinimumMasterNodesTests @AwaitsFix because of flakiness - [These tests](https://github.com/elastic/elasticsearch/blob/7ac361d86ecae47052b4132e63c5c3e97a09ccc7/plugins/discovery-azure-classic/src/test/java/org/elasticsearch/discovery/azure/classic/AzureMinimumMasterNodesTests.java#L44) don't seem to have run since May 2015 due to the `@AwaitsFix` annotation, but the referenced issue (https://github.com/elastic/elasticsearch-cloud-azure/issues/89) is closed.
Additionally, `discovery-azure-classic` [is deprecated](https://github.com/elastic/elasticsearch-cloud-azure/issues/91#issuecomment-229113595).
I've updated the annotation to point at this issue for now, but feel that the best thing to do is just to get rid of this test. | test | azureminimummasternodestests awaitsfix because of flakiness don t seem to have run since may due to the awaitsfix annotation but the referenced issue is closed additionally discovery azure classic i ve updated the annotation to point at this issue for now but feel that the best thing to do is just to get rid of this test | 1 |
321,830 | 23,874,191,724 | IssuesEvent | 2022-09-07 17:22:48 | ModerNews/MAL-API-Client-Upgraded | https://api.github.com/repos/ModerNews/MAL-API-Client-Upgraded | closed | Setup logging module | documentation enhancement | Create and setup logging for all modules using logging.py, create additional page in docs informing about logging module setup | 1.0 | Setup logging module - Create and setup logging for all modules using logging.py, create additional page in docs informing about logging module setup | non_test | setup logging module create and setup logging for all modules using logging py create additional page in docs informing about logging module setup | 0 |
401,348 | 11,789,072,264 | IssuesEvent | 2020-03-17 16:33:17 | OregonDigital/OD2 | https://api.github.com/repos/OregonDigital/OD2 | closed | Fix postgres environment vars | Bug Priority - High | ### Descriptive summary
I started from scratch today and postgres wouldn't start:
```
db-dev_1 | Error: Database is uninitialized and superuser password is not specified.
db-dev_1 | You must specify POSTGRES_PASSWORD to a non-empty value for the
db-dev_1 | superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
db-dev_1 |
db-dev_1 | You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
db-dev_1 | connections without a password. This is *not* recommended.
db-dev_1 |
db-dev_1 | See PostgreSQL documentation about "trust":
db-dev_1 | https://www.postgresql.org/docs/current/auth-trust.html
```
I hacked in the easy fix, `POSTGRES_HOST_AUTH_METHOD=trust`, and all's well again. But pg ***really*** doesn't like that, so I'm thinking we want to set a password. Since that password would be in our repo, we don't want the same password in dev/staging as production.
### Expected behavior
Postgres... um... actually starts up? I mean, do we really need "Expected behavior" here?
### Accessibility Concerns
Actually, maybe we shouldn't fix this. Having no database is a big accessibility win. If the website is inaccessible to *everybody*, it's equally usable by all! | 1.0 | Fix postgres environment vars - ### Descriptive summary
I started from scratch today and postgres wouldn't start:
```
db-dev_1 | Error: Database is uninitialized and superuser password is not specified.
db-dev_1 | You must specify POSTGRES_PASSWORD to a non-empty value for the
db-dev_1 | superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
db-dev_1 |
db-dev_1 | You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
db-dev_1 | connections without a password. This is *not* recommended.
db-dev_1 |
db-dev_1 | See PostgreSQL documentation about "trust":
db-dev_1 | https://www.postgresql.org/docs/current/auth-trust.html
```
I hacked in the easy fix, `POSTGRES_HOST_AUTH_METHOD=trust`, and all's well again. But pg ***really*** doesn't like that, so I'm thinking we want to set a password. Since that password would be in our repo, we don't want the same password in dev/staging as production.
### Expected behavior
Postgres... um... actually starts up? I mean, do we really need "Expected behavior" here?
### Accessibility Concerns
Actually, maybe we shouldn't fix this. Having no database is a big accessibility win. If the website is inaccessible to *everybody*, it's equally usable by all! | non_test | fix postgres environment vars descriptive summary i started from scratch today and postgres wouldn t start db dev error database is uninitialized and superuser password is not specified db dev you must specify postgres password to a non empty value for the db dev superuser for example e postgres password password on docker run db dev db dev you may also use postgres host auth method trust to allow all db dev connections without a password this is not recommended db dev db dev see postgresql documentation about trust db dev i hacked in the easy fix postgres host auth method trust and all s well again but pg really doesn t like that so i m thinking we want to set a password since that password would be in our repo we don t want the same password in dev staging as production expected behavior postgres um actually starts up i mean do we really need expected behavior here accessibility concerns actually maybe we shouldn t fix this having no database is a big accessibility win if the website is inaccessible to everybody it s equally usable by all | 0 |
18,362 | 10,226,936,504 | IssuesEvent | 2019-08-16 19:15:11 | pcrane70/hadoop | https://api.github.com/repos/pcrane70/hadoop | opened | CVE-2018-11693 (High) detected in opennms-opennms-source-23.0.0-1 | security vulnerability | ## CVE-2018-11693 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennms-opennms-source-23.0.0-1</b></p></summary>
<p>
<p>A Java based fault and performance management system</p>
<p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p>
<p>Found in HEAD commit: <a href="https://github.com/pcrane70/hadoop/commit/9996d65feb6ec3d97f72187616daad5418f51db5">9996d65feb6ec3d97f72187616daad5418f51db5</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (62)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/expand.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/expand.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/factory.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/boolean.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/util.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/value.h
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/emitter.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/callback_bridge.h
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/file.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/sass.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/operation.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/operators.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/constants.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/error_handling.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/custom_importer_bridge.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/parser.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/constants.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/list.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/cssize.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/functions.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/util.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/custom_function_bridge.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/custom_importer_bridge.h
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/bind.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/eval.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/backtrace.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/extend.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/sass_value_wrapper.h
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/error_handling.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_context_wrapper.h
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/debugger.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/emitter.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/number.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/color.h
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/ast.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/sass_values.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/output.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/check_nesting.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/null.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/cssize.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/functions.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/ast.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/to_c.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/to_value.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/color.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/inspect.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/values.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_context_wrapper.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/list.h
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/check_nesting.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/to_value.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/context.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/map.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/sass_context.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/string.cpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/context.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/prelexer.hpp
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/sass_types/boolean.h
- /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/node-sass/src/libsass/src/eval.cpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::skip_over_scopes which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service.
<p>Publish Date: 2018-06-04</p>
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11693>CVE-2018-11693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11693">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11693</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 3.5.5</p>
</p>
</details>
<p></p>
Azure/azure-sdk-for-go issue (closed 2020-10-29): Using Cosmos DB with Table Storage API. Labels: Cosmos, Mgmt, Service Attention, Storage, customer-reported.

### Bug Report
I would like to use Cosmos DB as a key-value store, and therefore I chose to use the Table Storage API, because according to the documentation of the "multi-model" capabilities the Table Storage API is the key-value store model.
But:
1. The documentation regarding the use of Table Storage with this Go SDK is not very extensive. For example:
- I couldn't find any examples of how to do simple CRUD operations
- Parameters are undocumented, for example there's `func (e *Entity) Get(timeout uint, ml MetadataLevel, options *GetEntityOptions) error`, but what should `timeout` be? Seconds or milliseconds? It's not in the GoDoc or anywhere else. Then, in the `GetEntityOptions`, there's a `RequestID` field. Does that need to be set? With which value?
2. I can probably figure out the above things with trial-and-error, but I don't even get that far because of the **following problem, which is the reason I open this GitHub issue**: In the Azure Portal on the Cosmos DB page the connection string is shown, which looks similar to this: `DefaultEndpointsProtocol=https;AccountName=foo;AccountKey=Abc123def456==;TableEndpoint=https://foo.table.cosmosdb.azure.com:443/;`. There are multiple issues:
- When creating a storage client with `storage.NewClientFromConnectionString(connString)` **I get the following error**: `azure: base storage service url required`. Now clearly the connection string contains a `TableEndpoint`, but looking at the code ([here](https://github.com/Azure/azure-sdk-for-go/blob/fbe7db0e3f9793ba3e5704efbab84f51436c136e/storage/client.go#L254)), it's only used when the connection string contains a `sharedaccesssignature`? Why does the connection string in the Azure portal not work in the SDK by default? Or why doesn't the GoDoc contain a note about any specific changes required to the connection string?
- Maybe `NewClient` can be used instead? What's meant to be passed as `serviceBaseURL`? Also, there's a `useHTTPS` parameter. Does that imply this client only works with HTTP(S)? On one documentation page about Cosmos DB or Table Storage I read that TCP should be used for better performance. Is this not possible with the SDK?
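Until the SDK accepts a `TableEndpoint`-only connection string, one stopgap is to parse the portal string manually and hand the pieces to `NewClient`. Below is an illustrative stdlib-only sketch; the `parseConnString` helper is hypothetical, not part of the SDK.

```go
package main

import (
	"fmt"
	"strings"
)

// parseConnString splits an Azure-style connection string
// ("Key1=val1;Key2=val2;...") into a map. Account keys are base64 and
// may end in '=', so each pair is split on the first '=' only.
func parseConnString(s string) map[string]string {
	parts := map[string]string{}
	for _, pair := range strings.Split(s, ";") {
		if pair == "" {
			continue // tolerate the trailing ';' the portal emits
		}
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) == 2 {
			parts[kv[0]] = kv[1]
		}
	}
	return parts
}

func main() {
	conn := "DefaultEndpointsProtocol=https;AccountName=foo;AccountKey=Abc123def456==;TableEndpoint=https://foo.table.cosmosdb.azure.com:443/;"
	p := parseConnString(conn)
	fmt.Println(p["AccountName"])   // prints "foo"
	fmt.Println(p["TableEndpoint"]) // prints "https://foo.table.cosmosdb.azure.com:443/"
}
```

The extracted `AccountName` and `AccountKey` could then be passed to `storage.NewClient`; for a Cosmos table endpoint the `serviceBaseURL` would presumably be `cosmosdb.azure.com` rather than the default `core.windows.net`, but that mapping is an assumption to verify against your account.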
Info:
- Imported package: `"github.com/Azure/azure-sdk-for-go/storage"`
- Package version: `v23.0.0`, Commit `07f918ba2d513bbc5b75bc4caac845e10f27449e`
- Go version: `go version go1.10.1 windows/amd64`
206,905 | 15,782,975,258 | IssuesEvent | 2021-04-01 13:27:39 | internetee/registry | https://api.github.com/repos/internetee/registry | closed | User is not able to apply registry lock if any of the lock associated statuses are already present | Tests prepared bug | Registrant is not able to lock a domain If it has any of the registry lock statuses (serverUpdateProhibited, serverTransferProhibited, serverDeleteProhibited) already applied.
Case1: domain has any number of registry lock statuses applied (bot not the lock it self)
* user can see the lock button in registrant portal, is able to press it but the lock is not applied, no statuses are changed and user can press the button again and again
Case2: user applied registry lock, but some of the three statuses were removed by server
* user can see the lock button in registrant portal, but pressing it does not restore the missing status, nothing is changed and user can press the button again
| 1.0 | User is not able to apply registry lock if any of the lock associated statuses are already present - Registrant is not able to lock a domain If it has any of the registry lock statuses (serverUpdateProhibited, serverTransferProhibited, serverDeleteProhibited) already applied.
Case1: domain has any number of registry lock statuses applied (bot not the lock it self)
* user can see the lock button in registrant portal, is able to press it but the lock is not applied, no statuses are changed and user can press the button again and again
Case2: user applied registry lock, but some of the three statuses were removed by server
* user can see the lock button in registrant portal, but pressing it does not restore the missing status, nothing is changed and user can press the button again
| test | user is not able to apply registry lock if any of the lock associated statuses are already present registrant is not able to lock a domain if it has any of the registry lock statuses serverupdateprohibited servertransferprohibited serverdeleteprohibited already applied domain has any number of registry lock statuses applied bot not the lock it self user can see the lock button in registrant portal is able to press it but the lock is not applied no statuses are changed and user can press the button again and again user applied registry lock but some of the three statuses were removed by server user can see the lock button in registrant portal but pressing it does not restore the missing status nothing is changed and user can press the button again | 1 |
543,256 | 15,879,216,772 | IssuesEvent | 2021-04-09 12:09:18 | mit-cml/appinventor-sources | https://api.github.com/repos/mit-cml/appinventor-sources | opened | Explore adding an OAuth/OIDC component | affects: ucr feature request issue: noted for future Work priority: low status: needs discussion status: new | **Describe the desired feature**
<!--
Describe the feature that you'd like to see implemented for App Inventor. More detail is useful as it allows us to better understand the complexity of the task.
-->
We often see people building apps with login capabilities. It would be good if we provided a component for building OAuth / Open ID Connect workflows into apps to more readily address login.
**Give an example of how this feature would be used**
<!--
How would a teacher or student use this feature?
-->
Rather than having people build their own auth workflow with user/pass inputs, we could enable apps to integrate with existing OAuth providers, such as a Google, Microsoft, or Apple login.
**Why doesn't the current App Inventor system address this use case?**
<!--
Explain why the use case cannot be completed using the features of the current system.
-->
App Inventor currently provides no built-in mechanism to authenticate app users.
**Why is this feature beneficial to App Inventor's educational mission?**
<!--
Because MIT App Inventor is aimed at educational use, we prioritize development of features with an educational benefit. Help us understand how your feature request relates to our mission.
-->
A teacher may wish to build an attendance app where students auth with their school credentials, for example, and this is not really possible with App Inventor. | 1.0 | Explore adding an OAuth/OIDC component - **Describe the desired feature**
<!--
Describe the feature that you'd like to see implemented for App Inventor. More detail is useful as it allows us to better understand the complexity of the task.
-->
We often see people building apps with login capabilities. It would be good if we provided a component for building OAuth / Open ID Connect workflows into apps to more readily address login.
**Give an example of how this feature would be used**
<!--
How would a teacher or student use this feature?
-->
Rather than having people build their own auth workflow with user/pass inputs, we could enable apps to integrate with existing OAuth providers, such as a Google, Microsoft, or Apple login.
**Why doesn't the current App Inventor system address this use case?**
<!--
Explain why the use case cannot be completed using the features of the current system.
-->
App Inventor currently provides no built-in mechanism to authenticate app users.
**Why is this feature beneficial to App Inventor's educational mission?**
<!--
Because MIT App Inventor is aimed at educational use, we prioritize development of features with an educational benefit. Help us understand how your feature request relates to our mission.
-->
A teacher may wish to build an attendance app where students auth with their school credentials, for example, and this is not really possible with App Inventor. | non_test | explore adding an oauth oidc component describe the desired feature describe the feature that you d like to see implemented for app inventor more detail is useful as it allows us to better understand the complexity of the task we often see people building apps with login capabilities it would be good if we provided a component for building oauth open id connect workflows into apps to more readily address login give an example of how this feature would be used how would a teacher or student use this feature rather than having people build their own auth workflow with user pass inputs we could enable apps to integrate with existing oauth providers such as a google microsoft or apple login why doesn t the current app inventor system address this use case explain why the use case cannot be completed using the features of the current system app inventor currently provides no built in mechanism to authenticate app users why is this feature beneficial to app inventor s educational mission because mit app inventor is aimed at educational use we prioritize development of features with an educational benefit help us understand how your feature request relates to our mission a teacher may wish to build an attendance app where students auth with their school credentials for example and this is not really possible with app inventor | 0 |
155,884 | 19,803,090,227 | IssuesEvent | 2022-01-19 01:27:55 | Chiencc/Sample_Webgoat | https://api.github.com/repos/Chiencc/Sample_Webgoat | opened | CVE-2022-23302 (High) detected in log4j-1.2.14.jar | security vulnerability | ## CVE-2022-23302 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.14.jar</b></p></summary>
<p>Log4j</p>
<p>Library home page: <a href="http://logging.apache.org/log4j/">http://logging.apache.org/log4j/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /sitory/log4j/log4j/1.2.14/log4j-1.2.14.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-1.2.14.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JMSSink in all versions of Log4j 1.x is vulnerable to deserialization of untrusted data when the attacker has write access to the Log4j configuration or if the configuration references an LDAP service the attacker has access to. The attacker can provide a TopicConnectionFactoryBindingName configuration causing JMSSink to perform JNDI requests that result in remote code execution in a similar fashion to CVE-2021-4104. Note this issue only affects Log4j 1.x when specifically configured to use JMSSink, which is not the default. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions.
<p>Publish Date: 2022-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-23302>CVE-2022-23302</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-23302 (High) detected in log4j-1.2.14.jar - ## CVE-2022-23302 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.14.jar</b></p></summary>
<p>Log4j</p>
<p>Library home page: <a href="http://logging.apache.org/log4j/">http://logging.apache.org/log4j/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /sitory/log4j/log4j/1.2.14/log4j-1.2.14.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-1.2.14.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JMSSink in all versions of Log4j 1.x is vulnerable to deserialization of untrusted data when the attacker has write access to the Log4j configuration or if the configuration references an LDAP service the attacker has access to. The attacker can provide a TopicConnectionFactoryBindingName configuration causing JMSSink to perform JNDI requests that result in remote code execution in a similar fashion to CVE-2021-4104. Note this issue only affects Log4j 1.x when specifically configured to use JMSSink, which is not the default. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions.
<p>Publish Date: 2022-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-23302>CVE-2022-23302</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in jar cve high severity vulnerability vulnerable library jar library home page a href path to dependency file pom xml path to vulnerable library sitory jar dependency hierarchy x jar vulnerable library found in base branch master vulnerability details jmssink in all versions of x is vulnerable to deserialization of untrusted data when the attacker has write access to the configuration or if the configuration references an ldap service the attacker has access to the attacker can provide a topicconnectionfactorybindingname configuration causing jmssink to perform jndi requests that result in remote code execution in a similar fashion to cve note this issue only affects x when specifically configured to use jmssink which is not the default apache reached end of life in august users should upgrade to as it addresses numerous other issues from the previous versions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
374,191 | 26,104,798,182 | IssuesEvent | 2022-12-27 11:51:06 | bounswe/bounswe2022group3 | https://api.github.com/repos/bounswe/bounswe2022group3 | closed | Tracing Weekly Efforts - Week #9 | documentation | ### Issue
All the team members need to keep track of their weekly efforts. You can find the created page [here](https://github.com/bounswe/bounswe2022group3/wiki/451-Week-9-Personal-Efforts)
### Task(s)
- [x] Arif Akbaba
- [x] Bilal Aytekin
- [x] Mehmet Gökberk Arslan
- [x] Furkan Akkurt
- [x] Nurlan Dadashov
- [x] Hatice Şule Erkul
- [x] Kadir Ersoy
- [x] Berke Özdemir
- [x] Mertcan Özkan
- [x] Muhammet Şen
- [x] Salim Kemal Tirit
- [x] Burak Yılmaz
### Deliverable(s)
* The [451 Week 9 Personal Efforts](https://github.com/bounswe/bounswe2022group3/wiki/451-Week-9-Personal-Efforts) page.
### Acceptance Criteria
* Weekly effort of each team member must be documented.
### Deadline of the issue
09.12.2022 15.00
### Reviewer
@bilalcim
### Deadline for Review
_No response_ | 1.0 | Tracing Weekly Efforts - Week #9 - ### Issue
All the team members need to keep track of their weekly efforts. You can find the created page [here](https://github.com/bounswe/bounswe2022group3/wiki/451-Week-9-Personal-Efforts)
### Task(s)
- [x] Arif Akbaba
- [x] Bilal Aytekin
- [x] Mehmet Gökberk Arslan
- [x] Furkan Akkurt
- [x] Nurlan Dadashov
- [x] Hatice Şule Erkul
- [x] Kadir Ersoy
- [x] Berke Özdemir
- [x] Mertcan Özkan
- [x] Muhammet Şen
- [x] Salim Kemal Tirit
- [x] Burak Yılmaz
### Deliverable(s)
* The [451 Week 9 Personal Efforts](https://github.com/bounswe/bounswe2022group3/wiki/451-Week-9-Personal-Efforts) page.
### Acceptance Criteria
* Weekly effort of each team member must be documented.
### Deadline of the issue
09.12.2022 15.00
### Reviewer
@bilalcim
### Deadline for Review
_No response_ | non_test | tracing weekly efforts week issue all the team members need to keep track of their weekly efforts you can find the created page task s arif akbaba bilal aytekin mehmet gökberk arslan furkan akkurt nurlan dadashov hatice şule erkul kadir ersoy berke özdemir mertcan özkan muhammet şen salim kemal tirit burak yılmaz deliverable s the page acceptance criteria weekly effort of each team member must be documented deadline of the issue reviewer bilalcim deadline for review no response | 0 |
60,441 | 14,544,497,377 | IssuesEvent | 2020-12-15 18:15:59 | mwilliams7197/lodash | https://api.github.com/repos/mwilliams7197/lodash | opened | CVE-2020-8244 (Medium) detected in bl-0.9.5.tgz | security vulnerability | ## CVE-2020-8244 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bl-0.9.5.tgz</b></p></summary>
<p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p>
<p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-0.9.5.tgz">https://registry.npmjs.org/bl/-/bl-0.9.5.tgz</a></p>
<p>Path to dependency file: lodash/package.json</p>
<p>Path to vulnerable library: lodash/node_modules/bl/package.json</p>
<p>
Dependency Hierarchy:
- codecov.io-0.1.6.tgz (Root Library)
- request-2.42.0.tgz
- :x: **bl-0.9.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mwilliams7197/lodash/commit/0f00cc90f9124fd42d547321357fd4483c6ac0b3">0f00cc90f9124fd42d547321357fd4483c6ac0b3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A buffer over-read vulnerability exists in bl <4.0.3, <3.0.1, <2.2.1, and <1.2.3 which could allow an attacker to supply user input (even typed) that if it ends up in consume() argument and can become negative, the BufferList state can be corrupted, tricking it into exposing uninitialized memory via regular .slice() calls.
<p>Publish Date: 2020-08-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8244>CVE-2020-8244</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8244">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8244</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 2.2.1,3.0.1,4.0.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"bl","packageVersion":"0.9.5","isTransitiveDependency":true,"dependencyTree":"codecov.io:0.1.6;request:2.42.0;bl:0.9.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.2.1,3.0.1,4.0.3"}],"vulnerabilityIdentifier":"CVE-2020-8244","vulnerabilityDetails":"A buffer over-read vulnerability exists in bl \u003c4.0.3, \u003c3.0.1, \u003c2.2.1, and \u003c1.2.3 which could allow an attacker to supply user input (even typed) that if it ends up in consume() argument and can become negative, the BufferList state can be corrupted, tricking it into exposing uninitialized memory via regular .slice() calls.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8244","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-8244 (Medium) detected in bl-0.9.5.tgz - ## CVE-2020-8244 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bl-0.9.5.tgz</b></p></summary>
<p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p>
<p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-0.9.5.tgz">https://registry.npmjs.org/bl/-/bl-0.9.5.tgz</a></p>
<p>Path to dependency file: lodash/package.json</p>
<p>Path to vulnerable library: lodash/node_modules/bl/package.json</p>
<p>
Dependency Hierarchy:
- codecov.io-0.1.6.tgz (Root Library)
- request-2.42.0.tgz
- :x: **bl-0.9.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mwilliams7197/lodash/commit/0f00cc90f9124fd42d547321357fd4483c6ac0b3">0f00cc90f9124fd42d547321357fd4483c6ac0b3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A buffer over-read vulnerability exists in bl <4.0.3, <3.0.1, <2.2.1, and <1.2.3 which could allow an attacker to supply user input (even typed) that if it ends up in consume() argument and can become negative, the BufferList state can be corrupted, tricking it into exposing uninitialized memory via regular .slice() calls.
<p>Publish Date: 2020-08-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8244>CVE-2020-8244</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8244">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8244</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 2.2.1,3.0.1,4.0.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"bl","packageVersion":"0.9.5","isTransitiveDependency":true,"dependencyTree":"codecov.io:0.1.6;request:2.42.0;bl:0.9.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.2.1,3.0.1,4.0.3"}],"vulnerabilityIdentifier":"CVE-2020-8244","vulnerabilityDetails":"A buffer over-read vulnerability exists in bl \u003c4.0.3, \u003c3.0.1, \u003c2.2.1, and \u003c1.2.3 which could allow an attacker to supply user input (even typed) that if it ends up in consume() argument and can become negative, the BufferList state can be corrupted, tricking it into exposing uninitialized memory via regular .slice() calls.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8244","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_test | cve medium detected in bl tgz cve medium severity vulnerability vulnerable library bl tgz buffer list collect buffers and access with a standard readable buffer interface streamable too library home page a href path to dependency file lodash package json path to vulnerable library lodash node modules bl package json dependency hierarchy codecov io tgz root library request tgz x bl tgz vulnerable library found in head commit a href found in base branch master vulnerability details a buffer over read vulnerability exists in bl and which could allow an attacker to supply user input even typed that if it ends up in consume argument and can become negative the bufferlist state can be corrupted tricking it into exposing uninitialized memory via regular slice calls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none 
scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a buffer over read vulnerability exists in bl and which could allow an attacker to supply user input even typed that if it ends up in consume argument and can become negative the bufferlist state can be corrupted tricking it into exposing uninitialized memory via regular slice calls vulnerabilityurl | 0 |
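The bl-0.9.5 record above describes CVE-2020-8244 only in prose. As a sketch of the failure mode — a toy Python model of the length bookkeeping, not bl's actual JavaScript source, with class and method names of my own choosing — a `consume()` that never checks for a negative count lets the cached length drift above the real number of buffered bytes; in bl, `slice()` then allocates a buffer of that inflated length and the unwritten tail is uninitialized memory.

```python
class TinyBufferList:
    """Toy model of bl's BufferList bookkeeping (not the real library):
    a list of byte chunks plus a cached total length."""

    def __init__(self):
        self._chunks = []
        self.length = 0  # cached byte count; must stay equal to the real data

    def append(self, data: bytes) -> None:
        self._chunks.append(data)
        self.length += len(data)

    def consume(self, n: int) -> "TinyBufferList":
        # Mirrors the vulnerable shape: no check that n is positive.
        while self._chunks:
            if n >= len(self._chunks[0]):
                n -= len(self._chunks[0])
                self.length -= len(self._chunks.pop(0))
            else:
                # Negative n: slicing trims from the END of the chunk,
                # while `self.length -= n` *increases* the cached length.
                self._chunks[0] = self._chunks[0][n:]
                self.length -= n
                break
        return self

    def consume_fixed(self, n: int) -> "TinyBufferList":
        # The patched releases (1.2.3 / 2.2.1 / 3.0.1 / 4.0.3) amount to
        # rejecting non-positive counts; the exact upstream check may differ.
        if n <= 0:
            return self
        return self.consume(n)
```

Running `consume(-4)` after appending six bytes leaves four real bytes but a cached length of ten — the corrupted state the advisory describes.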
573,772 | 17,023,705,802 | IssuesEvent | 2021-07-03 03:24:19 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Prioritize old render requests | Component: tilesathome Priority: major Resolution: wontfix Type: enhancement | **[Submitted to the original trac issue database at 6.10pm, Monday, 25th April 2011]**
While having a look at the list of open requests I saw that there are some requests that are more than two months old. Having a closer look at those tiles I saw furthermore that most of them weren't all them complex either. This suggests that the queue is basically some heap which is known to not work that well in regards to guaranteeing any deadline for the processing.
Wouldn't it be much better if instead of simply throwing things on a heap if the tile selection algorithm for selecting which tileset to render would take aging of requests into account, gradually increasing their priority?
Some simple way to do this would be something like:
RenderPrio = BasePrio * log_A(1+Age)/log_B(Complexity)
BasePrio in this case is 1 for aged tiles (tiles that are older than e.g. 2 months), 2 for render requests due to changes to the map and 3 for manual requests
Age is the Age of the request in seconds.
Complexity is the tile's complexity as shown by the client.
A is some integer base for weighting the age in the priority, suggested values would be between 2 and 8.
B is some integer base for weighting the complexity in the priority, suggested values would be between 10 and 16.
Higher render priorities should be preferred | 1.0 | Prioritize old render requests - **[Submitted to the original trac issue database at 6.10pm, Monday, 25th April 2011]**
While having a look at the list of open requests I saw that there are some requests that are more than two months old. Having a closer look at those tiles I saw furthermore that most of them weren't all them complex either. This suggests that the queue is basically some heap which is known to not work that well in regards to guaranteeing any deadline for the processing.
Wouldn't it be much better if instead of simply throwing things on a heap if the tile selection algorithm for selecting which tileset to render would take aging of requests into account, gradually increasing their priority?
Some simple way to do this would be something like:
RenderPrio = BasePrio * log_A(1+Age)/log_B(Complexity)
BasePrio in this case is 1 for aged tiles (tiles that are older than e.g. 2 months), 2 for render requests due to changes to the map and 3 for manual requests
Age is the Age of the request in seconds.
Complexity is the tile's complexity as shown by the client.
A is some integer base for weighting the age in the priority, suggested values would be between 2 and 8.
B is some integer base for weighting the complexity in the priority, suggested values would be between 10 and 16.
Higher render priorities should be preferred | non_test | prioritize old render requests while having a look at the list of open requests i saw that there are some requests that are more than two months old having a closer look at those tiles i saw furthermore that most of them weren t all them complex either this suggests that the queue is basically some heap which is known to not work that well in regards to guaranteeing any deadline for the processing wouldn t it be much better if instead of simply throwing things on a heap if the tile selection algorithm for selecting which tileset to render would take aging of requests into account gradually increasing their priority some simple way to do this would be something like renderprio baseprio log a age log b complexity baseprio in this case is for aged tiles tiles that are older than e g months for render requests due to changes to the map and for manual requests age is the age of the request in seconds complexity is the tile s complexity as shown by the client a is some integer base for weighting the age in the priority suggested values would be between and b is some integer base for weighting the complexity in the priority suggested values would be between and higher render priorities should be preferred | 0 |
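The tilesathome ticket above gives its aging formula only in prose. A minimal sketch of it in Python — the function name, default bases, and test values are mine, not from the ticket, and it assumes Complexity > 1 so the divisor is positive:

```python
import math

def render_priority(base_prio: int, age_seconds: float, complexity: float,
                    a: int = 4, b: int = 12) -> float:
    """RenderPrio = BasePrio * log_A(1 + Age) / log_B(Complexity).

    base_prio: 1 for aged tiles, 2 for map-change requests, 3 for manual
    requests, as the ticket suggests; a (2..8) and b (10..16) weight age
    and complexity.  Requires complexity > 1.
    """
    return base_prio * math.log(1 + age_seconds, a) / math.log(complexity, b)
```

Because the age term grows without bound, a two-month-old low-complexity request eventually outranks a fresh complex one — exactly the guarantee the heap-like queue lacks.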
317,143 | 27,216,608,675 | IssuesEvent | 2023-02-20 22:41:32 | acikkaynak/deprem-yardim-frontend | https://api.github.com/repos/acikkaynak/deprem-yardim-frontend | closed | bug: Translation problems on `Uydu` layer. | bug discussion approved emergency tested later | ## Bug Definition
There are two problem with the translation on the `Uydu` Layer:
1. Layer Name: Should be `Satellite` instead of Uydu in **EN** version.
2. There is a classification type named `Destroyed` which should be renamed as `collapsed`.
There should be a better translation for the 1st item since we do not provide the Satellite imagery. Later on we need to find a better naming to tell the user, this is a result/prediction of the status of the selected building based on Satellite imagery.

** discord username: @sercanerhan#0543 **
## Bug environment
> Describe the environment produces the bug.
afetharita.com
## Describe how you are producing the bug step by step
1. Go to '...'
3. Click '....'
4. Scroll to '....'
5. Bug appears
## Expected Behaviour
A clear and short text to decribe the expected behaviour.
## Screen shots
If possible, add screenshots to describe your bug.
## Desktop Information
- Operating System: [for example iOS]
- Browser [for example, safari]
- Version [for example 22]
## Mobile Phone Information
- Devıce: [for example iPhone6]
- Operating System (with the version): [for example iOS8.1]
- Version [for example default browser, safari]
- Browser Version [for example 22]
## Additional Context
Add any other context about the bug here
| 1.0 | bug: Translation problems on `Uydu` layer. - ## Bug Definition
There are two problem with the translation on the `Uydu` Layer:
1. Layer Name: Should be `Satellite` instead of Uydu in **EN** version.
2. There is a classification type named `Destroyed` which should be renamed as `collapsed`.
There should be a better translation for the 1st item since we do not provide the Satellite imagery. Later on we need to find a better naming to tell the user, this is a result/prediction of the status of the selected building based on Satellite imagery.

** discord username: @sercanerhan#0543 **
## Bug environment
> Describe the environment produces the bug.
afetharita.com
## Describe how you are producing the bug step by step
1. Go to '...'
3. Click '....'
4. Scroll to '....'
5. Bug appears
## Expected Behaviour
A clear and short text to decribe the expected behaviour.
## Screen shots
If possible, add screenshots to describe your bug.
## Desktop Information
- Operating System: [for example iOS]
- Browser [for example, safari]
- Version [for example 22]
## Mobile Phone Information
- Devıce: [for example iPhone6]
- Operating System (with the version): [for example iOS8.1]
- Version [for example default browser, safari]
- Browser Version [for example 22]
## Additional Context
Add any other context about the bug here
| test | bug translation problems on uydu layer bug definition there are two problem with the translation on the uydu layer layer name should be satellite instead of uydu in en version there is a classification type named destroyed which should be renamed as collapsed there should be a better translation for the item since we do not provide the satellite imagery later on we need to find a better naming to tell the user this is a result prediction of the status of the selected building based on satellite imagery discord username sercanerhan bug environment describe the environment produces the bug afetharita com describe how you are producing the bug step by step go to click scroll to bug appears expected behaviour a clear and short text to decribe the expected behaviour screen shots if possible add screenshots to describe your bug desktop information operating system browser version mobile phone information devıce operating system with the version version browser version additional context add any other context about the bug here | 1 |
13,602 | 8,601,254,240 | IssuesEvent | 2018-11-16 10:18:05 | Microsoft/BotFramework-WebChat | https://api.github.com/repos/Microsoft/BotFramework-WebChat | opened | 'SurfacePro4 and Surface Pro 4 (2)' controls define as a button but visually looks like an Image on 'Receipt card' content. | A11yUsable AccSelfLime Accessibility Bug HCL_BotFramework_WebChat Nov-18 UnderReview | **Actual Result:**
The 'SurfacePro4 and Surface Pro 4 (2)' buttons visually look like an image.
**Expected Result:**
The 'SurfacePro4 and Surface Pro 4 (2)' buttons should not look like an image.
Repro Steps:
1. Open URL [https://microsoft.github.io/BotFramework-WebChat/full-bundle/](https://microsoft.github.io/BotFramework-WebChat/full-bundle/) in Edge browser.
2. Navigate to 'Type your message' text box present in bottom of screen, type 'help' and press enter key.
3. Navigate to "Receipt card " button by using Tab key and press Enter on it to activate it.
4. New content will appear on the bottom of the screen.
5. Navigate to 'SurfacePro4 and Surface Pro 4 (2)' controls.
6. Press F12 to open developer tool and check the role for SurfacePro4 and Surface Pro 4 (2)'
7. Observe that the controls are defined as buttons but look like images.
User Impact:
Users will get confused if a control is defined as a button but visually looks like an image; they will not be able to perform the action.
**Test Environment:**
**OS:** Windows 10
**OS Build:** 17763.107
**OS Version:** 1809
**Browser**: Edge
**Attachment:**

15,550 | 3,475,248,986 | IssuesEvent | 2015-12-25 12:25:52 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | reopened | MapDestroyTest.destroyAllReplicasIncludingBackups | Team: Core Type: Test-Failure | ```
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at com.hazelcast.map.MapDestroyTest.assertAllPartitionContainersAreEmpty(MapDestroyTest.java:86)
at com.hazelcast.map.MapDestroyTest.access$000(MapDestroyTest.java:45)
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-IbmJDK1.6/com.hazelcast$hazelcast/782/testReport/junit/com.hazelcast.map/MapDestroyTest/destroyAllReplicasIncludingBackups/
28,191 | 11,598,705,204 | IssuesEvent | 2020-02-24 23:53:12 | LevyForchh/semantic-release-slack | https://api.github.com/repos/LevyForchh/semantic-release-slack | opened | CVE-2020-8116 (Medium) detected in dot-prop-4.2.0.tgz, dot-prop-3.0.0.tgz | security vulnerability | ## CVE-2020-8116 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>dot-prop-4.2.0.tgz</b>, <b>dot-prop-3.0.0.tgz</b></p></summary>
<p>
<details><summary><b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/semantic-release-slack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/semantic-release-slack/node_modules/npm/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- npm-5.3.5.tgz (Root Library)
- npm-6.13.7.tgz
- update-notifier-2.5.0.tgz
- configstore-3.1.2.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>dot-prop-3.0.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-3.0.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-3.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/semantic-release-slack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/semantic-release-slack/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- config-conventional-8.3.4.tgz (Root Library)
- conventional-changelog-conventionalcommits-4.2.1.tgz
- compare-func-1.3.2.tgz
- :x: **dot-prop-3.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/LevyForchh/semantic-release-slack/commit/470d7a5bb9932443939672850db615068976e540">470d7a5bb9932443939672850db615068976e540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
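For illustration only (this snippet is not from the advisory or from `dot-prop` itself), a naive dot-path setter in plain JavaScript shows how an attacker-controlled path containing `__proto__` can pollute `Object.prototype`; affected `dot-prop` versions were exploitable in the same spirit, and the fix resolution noted in this report (dot-prop 5.1.1) rejects such path segments:

```javascript
// Naive dot-path setter, similar in spirit to the vulnerable behavior.
// Do NOT use in production; it does not block "__proto__" segments.
function naiveSet(obj, path, value) {
  const parts = path.split('.');
  let node = obj;
  for (let i = 0; i < parts.length - 1; i++) {
    if (typeof node[parts[i]] !== 'object' || node[parts[i]] === null) {
      node[parts[i]] = {};
    }
    node = node[parts[i]];
  }
  node[parts[parts.length - 1]] = value;
}

// Walking "__proto__" lands on Object.prototype, so the write is global:
naiveSet({}, '__proto__.polluted', true);
console.log({}.polluted); // true — every plain object now appears "polluted"
```

A safe setter must reject `__proto__`, `prototype`, and `constructor` as path segments before assigning.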
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"dot-prop","packageVersion":"4.2.0","isTransitiveDependency":true,"dependencyTree":"@semantic-release/npm:5.3.5;npm:6.13.7;update-notifier:2.5.0;configstore:3.1.2;dot-prop:4.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"dot-prop - 5.1.1"},{"packageType":"javascript/Node.js","packageName":"dot-prop","packageVersion":"3.0.0","isTransitiveDependency":true,"dependencyTree":"@commitlint/config-conventional:8.3.4;conventional-changelog-conventionalcommits:4.2.1;compare-func:1.3.2;dot-prop:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"dot-prop - 5.1.1"}],"vulnerabilityIdentifier":"CVE-2020-8116","vulnerabilityDetails":"Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> -->
56,237 | 14,078,405,947 | IssuesEvent | 2020-11-04 13:31:43 | themagicalmammal/android_kernel_samsung_a5xelte | https://api.github.com/repos/themagicalmammal/android_kernel_samsung_a5xelte | opened | CVE-2014-9090 (Medium) detected in linuxv3.10 | security vulnerability | ## CVE-2014-9090 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.10</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/themagicalmammal/android_kernel_samsung_a5xelte/commit/738375813823cb33918102af385bdd5d82225e17">738375813823cb33918102af385bdd5d82225e17</a></p>
<p>Found in base branch: <b>cosmic-1.6-experimental</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_a5xelte/arch/x86/include/asm/traps.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_a5xelte/arch/x86/include/asm/traps.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The do_double_fault function in arch/x86/kernel/traps.c in the Linux kernel through 3.17.4 does not properly handle faults associated with the Stack Segment (SS) segment register, which allows local users to cause a denial of service (panic) via a modify_ldt system call, as demonstrated by sigreturn_32 in the linux-clock-tests test suite.
<p>Publish Date: 2014-11-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-9090>CVE-2014-9090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.9</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9090">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9090</a></p>
<p>Release Date: 2014-11-30</p>
<p>Fix Resolution: v3.18-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
439,184 | 12,678,828,669 | IssuesEvent | 2020-06-19 10:30:21 | pingcap/dumpling | https://api.github.com/repos/pingcap/dumpling | closed | Usability improvement with cmd line parameter `-t` and `-F` | difficulty/2-medium priority/P1 | ## Background
- `-t` controls how many tables can be exported concurrently. If the user also provides `-r`, each table can be split into several concurrent export jobs, so `-t` is not exactly the upper limit on export threads.
- `-F` only accepts an integer, which represents the size of the output file in bytes. Users may want to provide a more human-readable value such as `64MB` or `1GB`.
## Requirement
- Make `-t` control the global upper limit of export threads, or provide another parameter to limit the concurrency of a single table's export, so that global concurrency can be controlled.
- `-F` accepts human readable file size
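A minimal sketch of the size parsing the `-F` requirement implies (illustrative only — Dumpling itself is written in Go, and `parseFileSize` is a hypothetical name, not Dumpling's actual API):

```javascript
// Hypothetical helper: convert a human-readable -F value into bytes,
// accepting a bare integer (bytes) or a value with a B/KB/MB/GB suffix.
function parseFileSize(input) {
  const units = { B: 1, KB: 1024, MB: 1024 ** 2, GB: 1024 ** 3 };
  const m = /^(\d+(?:\.\d+)?)\s*(B|KB|MB|GB)?$/i.exec(input.trim());
  if (m === null) throw new Error(`invalid file size: ${input}`);
  // Default to plain bytes when no unit is given, for backward compatibility.
  return Math.round(parseFloat(m[1]) * units[(m[2] || 'B').toUpperCase()]);
}

console.log(parseFileSize('64MB')); // 67108864
console.log(parseFileSize('1GB'));  // 1073741824
```

Defaulting the unitless form to bytes keeps existing invocations working while allowing the friendlier `64MB`/`1GB` forms.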
228,855 | 18,267,146,808 | IssuesEvent | 2021-10-04 09:45:57 | mattermost/mattermost-server | https://api.github.com/repos/mattermost/mattermost-server | opened | Write Webapp E2E with Cypress: "MM-T575 /invite-people" | Difficulty/1:Easy Up For Grabs Hacktoberfest Area/E2E Tests Help Wanted Tech/Automation |
See our [documentation for Webapp end-to-end testing with Cypress](https://developers.mattermost.com/contribute/webapp/end-to-end-tests/) for reference.
<article>
<h1>MM-T575 /invite-people</h1>
<div>
<div>
<h3>Steps </h3>/invite_people<br>–––––––––––––––––––––––––<ol><li>Type /invite_people followed by two email addresses (can use your Mattermost email with a plus sign: you+1@mattermost.com, you+2@mattermost.com)</li><li>Press Enter</li></ol><h3>Expected</h3><ul><li>Both email addresses receive an invite to the team</li></ul><hr>
</div>
</div>
</article>
**Test Folder:** ``/cypress/integration/integrations``
**Test code arrangement:**
```
describe('Integrations', () => {
it('MM-T575 /invite-people', () => {
// code
});
});
```
Notes:
1. Do not add ``@prod`` label in a spec file
- If you're writing script into a newly created test file, ``@prod`` label should not be included.
   - If you're adding script into an existing test file, the ``@prod`` label should be removed.
2. Use [queries from testing-library](https://testing-library.com/docs/dom-testing-library/api-queries) whenever possible. We share the same philosophy as the [testing-library](https://testing-library.com/) when doing UI automation like "Interact with your app the same way as your users" and so, please follow their guidelines especially when querying an element.
If you're interested, please comment here and come [join our "Contributors" community channel](https://community.mattermost.com/core/channels/tickets) on our daily build server, where you can discuss questions with community members and the Mattermost core team. For technical advice or questions, please [join our "Developers" community channel](https://community.mattermost.com/core/channels/developers).
New contributors please see our [Developer's Guide](https://developers.mattermost.com/contribute/getting-started/).
87,103 | 25,034,099,626 | IssuesEvent | 2022-11-04 14:41:19 | dotnet/arcade | https://api.github.com/repos/dotnet/arcade | closed | Build failed: arcade-services-internal-ci/main #20221104.1 | Build Failed | Build [#20221104.1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=2037187) failed
## :x: : internal / arcade-services-internal-ci failed
### Summary
**Finished** - Fri, 04 Nov 2022 14:22:27 GMT
**Duration** - 222 minutes
**Requested for** - DotNet Bot
**Reason** - batchedCI
### Details
#### Validate deployment
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/302) - The job running on agent NetCore1ESPool-Internal 4 ran longer than the maximum time of 90 minutes. For more information, see https://go.microsoft.com/fwlink/?linkid=2077134
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/279) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/279) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/279) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/280) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/281) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/282) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/299) - The operation was canceled.
#### Validate Build Assets
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/97) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/98) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/99) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/100) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/101) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/102) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/103) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/104) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/105) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/106) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/110) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/113) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/114) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/115) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/116) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/118) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/119) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/120) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/121) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/122) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/123) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/124) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
#### Deploy
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/212) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/213) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/230) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/231) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/191) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/192) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/185) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/186) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/244) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/245) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
#### Post-Deployment
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/267) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/268) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/268) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/268) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/269) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/269) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/269) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/270) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/270) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/270) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/271) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/271) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/271) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/272) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/272) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/272) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
#### Build
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/5) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/6) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/7) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/8) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/9) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/10) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/11) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/12) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/29) - Component Governance detected 2 security related alerts at or above 'High' severity. Microsoft’s Open Source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components. Vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/29) - Component Governance detected 2 security alert(s) at or above 'High' severity that need to be resolved. On their Due date these alerts will break the build.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/29) - The Component Detection tool partially succeeded. See the logs for more information.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/61) - Component Governance detected 2 security related alerts at or above 'High' severity. Microsoft’s Open Source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components. Vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/61) - Component Governance detected 2 security alert(s) at or above 'High' severity that need to be resolved. On their Due date these alerts will break the build.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/61) - The Component Detection tool partially succeeded. See the logs for more information.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/72) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/73) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/74) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/75) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
#### Publish using Darc
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/169) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/170) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/171) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/172) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/173) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/174) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/175) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/176) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/177) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/178) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
### Changes
- [0de63510](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/0de63510204195ad9806cffd7e37b215a98942b9) - Přemek Vysoký - Keep `tarball/content` and VMR's root synchronized always (#2086)
- [624325c7](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/624325c776856052bb708da6b255448e714b15e3) - Chad Nedzlek - Expose Console.In for some command line apps
- [d2e30ce1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/d2e30ce1f0e158b0ae2de179b9307eb0b8fb8f42) - Chad Nedzlek - Expose Console.In for some command line apps
- [785cfc67](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/785cfc67c1c01cc6d465e4824ebf3a64fd75e2b3) - Kyaw Thant - Use --config-env to pass in user and pat (#2081)
- [51918ca4](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/51918ca4d38bbd917ef47d610e95c4cb038e3e76) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20221031.5 (#2084)
- [cc8a2aea](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/cc8a2aea1dc604f3572d32e464f17fbae3a0bcd6) - Přemek Vysoký - Generate a list of components for VMR's main README (#2083)
- [8f3e0e5f](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/8f3e0e5ff5625245c5c092f53d110f3b302aebe6) - Chad Nedzlek - Add aliases to commands
- [65928566](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/659285662336369ae6caee3f77a3680884914dfe) - Chad Nedzlek - -p to --project because of warnings
- [36aa6bcd](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/36aa6bcd00a0acb3ad24ab08f26089d651242c9e) - Chad Nedzlek - -p to --project because of warnings
- [274a18c0](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/274a18c07ce5b0ff425c8a225b4a5db6f510c0a6) - Chad Nedzlek - Wrong helper overload
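Each `[[Log]]` link in this report is a plain Azure DevOps REST endpoint of the form `.../_apis/build/builds/{buildId}/logs/{logId}`, so the raw log text can be fetched with any authenticated HTTP client instead of the web UI. A minimal sketch of assembling such a URL (the function name is illustrative; actually downloading a log would additionally need a PAT with Build read scope passed as basic-auth):

```python
def build_log_url(organization: str, project: str, build_id: int, log_id: int) -> str:
    """Return the Azure DevOps REST endpoint for a single build log."""
    return (
        f"https://dev.azure.com/{organization}/{project}"
        f"/_apis/build/builds/{build_id}/logs/{log_id}"
    )

# The project GUID and build id are taken verbatim from the links above.
url = build_log_url("dnceng", "7ea9116e-9fac-403d-b258-b31fcf1bb293", 2037187, 29)
print(url)
# e.g. curl -u :$AZURE_DEVOPS_PAT "$url" would stream the log text.
```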
Build failed: arcade-services-internal-ci/main #20221104.1 - Build [#20221104.1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=2037187) failed
## :x: : internal / arcade-services-internal-ci failed
### Summary
**Finished** - Fri, 04 Nov 2022 14:22:27 GMT
**Duration** - 222 minutes
**Requested for** - DotNet Bot
**Reason** - batchedCI
### Details
#### Validate deployment
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/302) - The job running on agent NetCore1ESPool-Internal 4 ran longer than the maximum time of 90 minutes. For more information, see https://go.microsoft.com/fwlink/?linkid=2077134
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/279) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/280) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/281) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/282) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/299) - The operation was canceled.
#### Validate Build Assets
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/97) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/98) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/99) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/100) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/101) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/102) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/103) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/104) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/105) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/106) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/110) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/113) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/114) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/115) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/116) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/118) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/119) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/120) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/121) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/122) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/123) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/124) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
#### Deploy
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/212) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/213) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/230) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/231) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/191) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/192) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/185) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/186) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/244) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/245) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
#### Post-Deployment
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/267) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/267) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/267) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/268) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/269) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/270) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/271) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/272) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
#### Build
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/5) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/6) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/7) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/8) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/9) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/10) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/11) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/12) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/29) - Component Governance detected 2 security related alerts at or above 'High' severity. Microsoft’s Open Source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components. Vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/29) - Component Governance detected 2 security alert(s) at or above 'High' severity that need to be resolved. On their Due date these alerts will break the build.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/29) - The Component Detection tool partially succeeded. See the logs for more information.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/61) - Component Governance detected 2 security related alerts at or above 'High' severity. Microsoft’s Open Source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components. Vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/61) - Component Governance detected 2 security alert(s) at or above 'High' severity that need to be resolved. On their Due date these alerts will break the build.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/61) - The Component Detection tool partially succeeded. See the logs for more information.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/72) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/73) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/74) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/75) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
#### Publish using Darc
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/169) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/170) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/171) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/172) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/173) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/174) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/175) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/176) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/177) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2037187/logs/178) - Resource file has already set to: D:\a\_work\_tasks\AzureKeyVault_1e244d32-2dd4-4165-96fb-b7441ca9331e\1.212.0\node_modules\azure-pipelines-tasks-azure-arm-rest-v2\module.json
### Changes
- [0de63510](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/0de63510204195ad9806cffd7e37b215a98942b9) - Přemek Vysoký - Keep `tarball/content` and VMR's root synchronized always (#2086)
- [624325c7](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/624325c776856052bb708da6b255448e714b15e3) - Chad Nedzlek - Expose Console.In for some command line apps
- [d2e30ce1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/d2e30ce1f0e158b0ae2de179b9307eb0b8fb8f42) - Chad Nedzlek - Expose Console.In for some command line apps
- [785cfc67](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/785cfc67c1c01cc6d465e4824ebf3a64fd75e2b3) - Kyaw Thant - Use --config-env to pass in user and pat (#2081)
- [51918ca4](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/51918ca4d38bbd917ef47d610e95c4cb038e3e76) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20221031.5 (#2084)
- [cc8a2aea](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/cc8a2aea1dc604f3572d32e464f17fbae3a0bcd6) - Přemek Vysoký - Generate a list of components for VMR's main README (#2083)
- [8f3e0e5f](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/8f3e0e5ff5625245c5c092f53d110f3b302aebe6) - Chad Nedzlek - Add aliases to commands
- [65928566](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/659285662336369ae6caee3f77a3680884914dfe) - Chad Nedzlek - -p to --project because of warnings
- [36aa6bcd](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/36aa6bcd00a0acb3ad24ab08f26089d651242c9e) - Chad Nedzlek - -p to --project because of warnings
- [274a18c0](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/ec29e881-6ad7-4427-832d-ce639ccba518/commit/274a18c07ce5b0ff425c8a225b4a5db6f510c0a6) - Chad Nedzlek - Wrong helper overload
modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure 
pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json deploy warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines 
tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm 
rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json post deployment warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm 
rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json build warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module 
json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning 
resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning component governance detected security related alerts at or above high severity microsoft’s open source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency warning component governance detected security alert s at or above high severity that need to be resolved on their due date these alerts will break the build warning the component detection tool partially succeeded see the logs for more information warning component governance detected security related alerts at or above high severity microsoft’s open source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency warning component governance detected security alert s at or above high severity that need to be resolved on their due date these alerts will break the build warning the component detection tool partially succeeded see the logs for more information warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already 
set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json publish using darc warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already 
set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work 
tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json warning resource file has already set to d a work tasks azurekeyvault node modules azure pipelines tasks azure arm rest module json changes přemek vysoký keep tarball content and vmr s root synchronized always chad nedzlek expose console in for some command line apps chad nedzlek expose console in for some command line apps kyaw thant use config env to pass in user and pat dotnet maestro update dependencies from build přemek vysoký generate a list of components for vmr s main readme chad nedzlek add aliases to commands chad nedzlek p to project because of warnings chad nedzlek p to project because of warnings chad nedzlek wrong helper overload | 0 |
49,883 | 26,385,734,478 | IssuesEvent | 2023-01-12 12:08:17 | tarantool/tarantool | https://api.github.com/repos/tarantool/tarantool | reopened | sql: implement threaded interpretation for VDBE | sql performance good first issue optimization | Overall, the problem is discussed here: https://github.com/tarantool/tarantool/issues/3330
Implementing threaded interpretation is quite simple. However, it requires compiler support for computed gotos. Most mainstream compilers (gcc, clang, icc) provide that feature.
There are several example implementations:
SQLite https://github.com/AlexKashuba/SQLite_JIT/blob/master/versions/sqlite_disp/jitsrc/vdbe.c#L646
PostgreSQL https://github.com/postgres/postgres/blob/master/src/backend/executor/execExprInterp.c#L117
The plan is as follows:
1. Patch mkopcodec.sh and mkopcodeh.sh to produce an array of goto addresses; it must be organised in the same order as the corresponding OP_ values are defined.
2. Define DISPATCH/CASE macros to hide the internal implementation of dispatching: if the compiler doesn't support computed labels, fall back to the common switch-based technique.
3. Replace the break and case keywords with the macros mentioned above.
4. Benchmark results using TPC-H set of queries. | True | non_test | 0
252,027 | 21,553,202,684 | IssuesEvent | 2022-04-30 01:49:44 | zero88/jooqx | https://api.github.com/repos/zero88/jooqx | closed | Stabilize test | CL: Simple Medium P: High T: Improvement C: testing !release-note | - [x] One schema for testing data type
- [x] One table has all column datatypes with default jOOQ
- [x] One table has same columns as above with some data type converter to Vertx type
- [x] Use sakila schema for a testing relationship, CRUD
- [x] https://github.com/jOOQ/sakila
- ~https://github.com/ivanceras/sakila~
| 1.0 | test | 1