Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | label stringclasses 2 values | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
120,438 | 25,794,599,223 | IssuesEvent | 2022-12-10 12:16:31 | grafana/grafana | https://api.github.com/repos/grafana/grafana | closed | Generate core kind TS types at versioned paths using an intermediate jenny | type/codegen area/kindsys | Currently, the core kinds generator hardcodes the [`TSTypesJenny`](https://github.com/grafana/grafana/blob/5ea077c44026709147a9992bde47aea0c5f9a9c0/pkg/codegen/jenny_tstypes.go#L21) to generate each kind's typescript types under an experimental `x` subpath:
https://github.com/grafana/grafana/blob/5ea077c44026709147a9992bde47aea0c5f9a9c0/kinds/gen.go#L44-L49
Hardcoding this is clearly wrong. The `x` is fine for now, while we're not allowing any kinds to advance past `experimental` maturity. But we'd like to get to the point - soon! - where we lift that restriction and, for `stable` or later maturities, generate their types under versioned paths (`v0`, `v1`, `v2`...).
This is a pattern we're going to need in a lot of places, so it's worth making some kind of intermediate/meta-jenny to do the job. | 1.0 | non_priority | 0 |
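A minimal sketch of the versioned-path rule this record describes (Python purely for illustration — the real generator is Go, and the maturity names here are assumptions, not Grafana's actual kindsys API):

```python
def ts_types_subpath(maturity: str, version: int) -> str:
    """Map a kind's maturity to the subpath its generated TS types live under:
    pre-stable kinds stay under the experimental `x` subpath, while
    stable-or-later kinds get a versioned `v<N>` subpath."""
    if maturity in ("merged", "experimental"):
        return "x"
    return f"v{version}"

assert ts_types_subpath("experimental", 0) == "x"
assert ts_types_subpath("stable", 1) == "v1"
```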
37,385 | 9,996,213,326 | IssuesEvent | 2019-07-11 22:34:19 | microsoft/terminal | https://api.github.com/repos/microsoft/terminal | closed | README.md still refers to VS2017 and v14.1 toolchain | Area-Build In-PR Issue-Docs Product-Terminal |
The new requirements to build the project after #1012 are not documented.
I have problems trying to build with VS2019, but want to be sure I have the correct setup before reporting those issues. | 1.0 | non_priority | 0 |
178,112 | 29,499,290,084 | IssuesEvent | 2023-06-02 19:58:41 | PrefectHQ/prefect | https://api.github.com/repos/PrefectHQ/prefect | closed | "red green" bars are missing in the new Flows screen | status:accepted status:as-designed ui | ### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the Prefect documentation for this issue.
- [X] I checked that this issue is related to Prefect and not one of its dependencies.
### Bug summary
In the new Flows screen, the "red green" bars that represent the run frequency of all the Flows/Deployments are gone.

This is a super useful view since it lets you verify, in one quick page scroll, that all Flows are running as needed.
There is no other way in Prefect's UI to get such information in such a quick way.
### Reproduction
```python3
Happens in the Prefect Cloud UI in the new Flows screen.
```
### Error
_No response_
### Versions
```Text
Prefect cloud.
```
### Additional context
_No response_ | 1.0 | non_priority | 0 |
23,300 | 4,927,233,368 | IssuesEvent | 2016-11-26 16:32:28 | vincentmorneau/material-apex | https://api.github.com/repos/vincentmorneau/material-apex | closed | Add available substitution strings in report templates | documentation | - [x] Card Basic
- [x] Card Image
- [x] Card Reveal
- [x] Chips
- [x] Collapsible
- [x] Collection
- [x] Dropdown
- [x] Slider
- [x] Staggered List
- [x] Timeline
Like UT

| 1.0 | non_priority | 0 |
77,896 | 9,636,547,940 | IssuesEvent | 2019-05-16 06:22:19 | ManageIQ/manageiq-v2v | https://api.github.com/repos/ManageIQ/manageiq-v2v | closed | Conversion Host Enablement - Configuration Wizard - Add tooltip/popover explaining VDDK library path field | bz enhancement hammer/yes needs-design v1.2 z-stream | ~~Part of the Conversion Hosts UI feature BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1693339
(or maybe a followup BZ)~~ Associated BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1705620
As a stopgap before we can implement https://github.com/ManageIQ/manageiq-v2v/issues/881, we agreed to add an info icon with a popover explaining the VDDK Library Path field on the Authentication step of the wizard.
@vconzola, I don't think a mockup is necessary, but can you help me figure out the text that should be in this popover? | 1.0 | non_priority | 0 |
342,878 | 30,642,609,550 | IssuesEvent | 2023-07-25 00:02:02 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | reopened | Fix general_functions.test_tensorflow_reverse | TensorFlow Frontend Sub Task Failing Test | | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5650465643/job/15306852454"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5650465643/job/15306852454"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5650465643/job/15306852454"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5650465643/job/15306852454"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5650465643/job/15306852454"><img src=https://img.shields.io/badge/-failure-red></a>
| 1.0 | non_priority | 0 |
101,371 | 8,786,776,995 | IssuesEvent | 2018-12-20 16:37:08 | mozilla-mobile/focus-android | https://api.github.com/repos/mozilla-mobile/focus-android | closed | On tablet simulator, cached webpage with trackers never completes loading | P2 needs investigation testing | ## Steps to reproduce
Run ToggleBlockTest.java on Nexus 9 x86 simulator in Webview (debug build)
### Expected behavior
Test passes
### Actual behavior
Test fails when the 'Trackers blocked' switch is turned off and the page is reloading. The adb log says onSessionLoadIdlingResource is not being set to idle, and the progress bar does not complete.
This doesn't happen on phone devices; I will also try it out on a Nexus 9 device next week and determine whether this is a new test framework issue or an app issue.
### Device information
* Android device: Nexus 9 API 25 Android simulator
* Focus version: master
| 1.0 | non_priority | 0 |
54,593 | 30,265,246,145 | IssuesEvent | 2023-07-07 11:19:09 | resqiar/resqiar.com | https://api.github.com/repos/resqiar/resqiar.com | closed | Compress & Optimize Images! | performance | As part of optimizing the website's performance, we need to compress the images used throughout the site. This will help reduce the overall page size and improve loading times for our users.
### Goals
- Reduce the file size of images without compromising their quality.
- Implement a compression strategy that can be easily integrated into our existing image handling workflow.
- Test and validate the compressed images to ensure they maintain visual integrity.
### Acceptance Criteria
- The compressed images should maintain a visually acceptable quality.
- The overall file size of the website should be significantly reduced after implementing image compression.
- The compression process should be automated and integrated into our existing workflow. | True | non_priority | 0 |
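A minimal sketch of one way to automate the compression step described in this record, assuming the Pillow library and an illustrative `static/images` directory (neither is specified in the issue):

```python
from pathlib import Path
from PIL import Image  # assumes Pillow is installed

def compress_image(src: Path, dst: Path, quality: int = 80) -> None:
    """Re-encode an image as an optimized JPEG at the given quality."""
    with Image.open(src) as img:
        img = img.convert("RGB")  # JPEG does not support an alpha channel
        img.save(dst, format="JPEG", optimize=True, quality=quality)

# Re-encode every PNG under the (assumed) image directory.
for path in Path("static/images").glob("*.png"):
    compress_image(path, path.with_suffix(".jpg"))
```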
856 | 2,759,104,220 | IssuesEvent | 2015-04-28 00:09:29 | piwik/piwik | https://api.github.com/repos/piwik/piwik | opened | Tracker: Faster visitor recognition | c: Performance | As discussed with @quba I think the union here is not needed: https://github.com/piwik/piwik/blob/2.13.0-rc2/core/Tracker/Model.php#L382-L386
Instead we could perform one query first to check if there is a `idvisitor = ?` and if not execute a second query with `config_id = ? AND user_id IS NULL`
We noticed on a DB with many log entries that this visitor recognition query is rather slow and this should improve it as in most cases a visitor based on `idvisitor` will be found. | True | non_priority | 0 |
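A sketch of the proposed two-step lookup (Python pseudocode against a DB-API-style connection; the `fetch_one` helper and the exact column names are assumptions for illustration, not Piwik's actual PHP code):

```python
def find_visitor(db, idvisitor: bytes, config_id: bytes):
    """Try the cheap idvisitor match first; only fall back to the
    config_id heuristic when it finds nothing."""
    row = db.fetch_one(
        "SELECT idvisit FROM log_visit WHERE idvisitor = %s "
        "ORDER BY visit_last_action_time DESC LIMIT 1",
        (idvisitor,),
    )
    if row is not None:
        return row  # the common case, so the slower second query is usually skipped
    return db.fetch_one(
        "SELECT idvisit FROM log_visit "
        "WHERE config_id = %s AND user_id IS NULL "
        "ORDER BY visit_last_action_time DESC LIMIT 1",
        (config_id,),
    )
```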
69,436 | 30,281,274,864 | IssuesEvent | 2023-07-08 05:01:46 | FernandoValero/Trabajo-Practico-Final | https://api.github.com/repos/FernandoValero/Trabajo-Practico-Final | opened | Implementation of the service layer for IMC (BMI) | Capa service | - Create the IIndiceMasaCorporal interface and its abstract methods
- In the service.imp package, create the IndiceMasaCorporalImp class implementing the interface above. | 1.0 | non_priority | 0 |
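A language-agnostic sketch of the interface/implementation split this task asks for (the actual project targets Java; Python ABCs and the `calculate` method name are used here purely for illustration):

```python
from abc import ABC, abstractmethod

class IIndiceMasaCorporal(ABC):
    """Service contract for the BMI (IMC) calculation."""
    @abstractmethod
    def calculate(self, weight_kg: float, height_m: float) -> float: ...

class IndiceMasaCorporalImp(IIndiceMasaCorporal):
    def calculate(self, weight_kg: float, height_m: float) -> float:
        return weight_kg / (height_m ** 2)  # BMI = kg / m^2
```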
405,217 | 27,507,694,571 | IssuesEvent | 2023-03-06 05:48:25 | hypeboyyy/fastcampus-project-board | https://api.github.com/repos/hypeboyyy/fastcampus-project-board | closed | Organizing the GitHub project and issues | documentation | Set up the GitHub project and organize the work by creating cards.
* [x] Create the project (beta)
* [x] Create the card list - based on the lecture curriculum
* [x] Convert the cards into issues as appropriate | 1.0 | non_priority | 0 |
71,813 | 7,261,724,506 | IssuesEvent | 2018-02-18 23:34:54 | vgstation-coders/vgstation13 | https://api.github.com/repos/vgstation-coders/vgstation13 | closed | Some fish icon errors | 100% tested Bug / Fix Sprites | Mostly as a note to myself to fix. This was probably inevitable given what that PR went through, since the sprite commit was originally lost. Basically, these food items didn't get their icon set to seafood.dmi so they're invisible. Easy fix.
fried shrimp
citrus baked salmon
smoked salmon | 1.0 | non_priority | 0 |
123,234 | 10,257,911,043 | IssuesEvent | 2019-08-21 21:17:58 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | [UI] Cluster Templates - Remove filled in "First revision" text for Template Revision Name field | [zube]: To Test kind/bug-qa team/ui | Version: master-head (v2.3) (8/13/19)
**What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
- Create a CT
**Result:**
Notice that, for some reason, there is text pre-filled in the Template Revision Name field.

This used to not be there (I'm not aware of an issue to add this).
This could be helper text (the grayed-out placeholder text rather than actual content), but I don't even like the idea of "First revision": revisions should be descriptive and very briefly explain what the revision is, in my opinion. Also, spaces are not allowed.
| 1.0 | non_priority | 0 |
163,228 | 25,779,048,078 | IssuesEvent | 2022-12-09 14:26:22 | kubeshop/tracetest | https://api.github.com/repos/kubeshop/tracetest | reopened | [In-App Config] Setup Wizard | design frontend backend | The setup wizard is the main in-app config feature that allows users to configure tracetest to start supporting traces.
Acceptance Criteria:
AC1:
As a user looking at the setup wizard
I should be able to see a brief explanation about why a data store is needed
And a way to select the data store of my preference
AC2 (Special Case for Otel Collector):
As a user looking at the UI setup wizard
And I select the Otel Collector as my data store
I should see a small tutorial with the steps I need to take to configure it
AC3 (Data Store Step):
https://github.com/kubeshop/tracetest/issues/1627
AC4:
As a user that has just filled in the data store information
I should be able to see an option to add the exporter configuration
AC5:
As a user looking at the wizard setup
And I have clicked adding the exporter configuration
I should be able to see the following fields:
- serviceName
- sampling
- type
- collector
- endpoint
AC6:
As a user looking at the UI setup wizard
After filling in the details to configure the exporter section
And I choose the test connection option
I should be able to see the result (error or success)
And details about the results
AC7:
As a user looking at the UI setup wizard
After filling in the details to configure the data store
And I have tested that the connection is successful
I should be able to save the changes
Without testing the connection
AC8:
As a user looking at the UI setup wizard
And I have clicked save changes
I should see a prompt message letting me know that Tracetest will be restarted and there will be some downtime after confirmation.
AC9:
As a user that just went through the setup process
I should see a change in the UI that will tell me that Tracetest is ready to support traces
AC10:
As a user trying to update the setup setting
If there are some tests or transactions in progress
I should see a warning telling me about what is going to happen after the change | 1.0 | [In-App Config] Setup Wizard - The setup wizard is the main in-app config feature that allows users to configure tracetest to start supporting traces.
Acceptance Criteria:
AC1:
As a user looking at the setup wizard
I should be able to see a brief explanation about why a data store is needed
And a way to select the data store of my preference
AC2 (Special Case for Otel Collector):
As a user looking at the UI setup wizard
And I select the Otel Collector as my data store
I should see a small tutorial with the steps I need to take to configure it
AC3 (Data Store Step):
https://github.com/kubeshop/tracetest/issues/1627
AC4:
As a user that just fill in the data store information
I should be able to see an option to add the exporter configuration
AC5:
As a user looking at the wizard setup
And I have clicked adding the exporter configuration
I should be able to see the following fields:
- serviceName
- sampling
- type
- collector
- endpoint
AC6:
As a user looking at the UI setup wizard
After filling in the details to configure the exporter section
And I choose the test connection option
I should be able to see the result (error or success)
And details about the results
AC7:
As a user looking at the UI setup wizard
After filling in the details to configure the data store
And I have tested that the connection is successful
I should be able to save the changes
Without testing the connection
AC8:
As a user looking at the UI setup wizard
And I have clicked save changes
I should see a prompt message letting me know that Tracetest will be restarted and there will be some downtime after confirmation.
AC9:
As a user that just went through the setup process
I should see a change in the UI that will tell me that Tracetest is ready to support traces
AC10:
As a user trying to update the setup setting
If there are some tests or transactions in progress
I should see a warning telling me about what is going to happen after the change | non_priority | setup wizard the setup wizard is the main in app config feature that allows users to configure tracetest to start supporting traces acceptance criteria as a user looking at the setup wizard i should be able to see a brief explanation about why a data store is needed and a way to select the data store of my preference special case for otel collector as a user looking at the ui setup wizard and i select the otel collector as my data store i should see a small tutorial with the steps i need to take to configure it data store step as a user that just fill in the data store information i should be able to see an option to add the exporter configuration as a user looking at the wizard setup and i have clicked adding the exporter configuration i should be able to see the following fields servicename sampling type collector endpoint as a user looking at the ui setup wizard after filling in the details to configure the exporter section and i choose the test connection option i should be able to see the result error or success and details about the results as a user looking at the ui setup wizard after filling in the details to configure the data store and i have tested that the connection is successful i should be able to save the changes without testing the connection as a user looking at the ui setup wizard and i have clicked save changes i should see a prompt message letting me know that tracetest will be restarted and there will be some downtime after confirmation as a user that just went through the setup process i should see a change in the ui that will tell me that tracetest is ready to support traces as a user trying to update the setup setting if there are some tests or transactions in progress i should see a warning telling me about what is going to happen after the change | 0 |
213,047 | 16,507,958,403 | IssuesEvent | 2021-05-25 22:01:31 | facebookresearch/detectron2 | https://api.github.com/repos/facebookresearch/detectron2 | closed | Vínculos | documentation | ## 📚 Documentation Improvements
* Provide a link to the relevant documentation/comment/tutorial:
* How should the above documentation/comment/tutorial improve:
| 1.0 | non_priority | 0 |
7,701 | 3,594,867,831 | IssuesEvent | 2016-02-02 02:04:59 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | opened | Update BenchI and BenchF benchmarks to validate results | CodeGen performance | These benchmarks (tests/src/JIT/performance/codequality) don't all consistently check for correctness of results. They've been useful in the past without it, but it would be a nice additional validation measure to do this. | 1.0 | non_priority | 0 |
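The pattern this record asks for — each benchmark returning a value that is checked against a known-good result — looks roughly like this (Python for illustration; the actual tests are C#):

```python
def bench_sum(n: int = 1_000_000) -> int:
    """Toy benchmark workload whose result can be validated."""
    total = 0
    for i in range(n):
        total += i
    return total

EXPECTED = 499_999_500_000  # sum of 0..999999, precomputed

result = bench_sum()
assert result == EXPECTED, f"benchmark produced wrong result: {result}"
```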
67,628 | 27,972,252,693 | IssuesEvent | 2023-03-25 06:15:40 | hashicorp/terraform-provider-azurerm | https://api.github.com/repos/hashicorp/terraform-provider-azurerm | closed | azurerm_cdn_frontdoor_endpoint: Get: Failure sending request: StatusCode=429 | waiting-response service/cdn | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Terraform Version
1.3.4
### AzureRM Provider Version
3.31.0
### Affected Resource(s)/Data Source(s)
azurerm_cdn_frontdoor_endpoint
### Terraform Configuration Files
```hcl
variable "environmentNames" {
type = list(string)
default = []
}
environmentNames = ["xxx-int01", "xxx-int01", "xxx-uat01", "xxx-prd01"]
// AKS Config
variable "aksClusters" {
type = map(object({
env = string
instance = string
location = string
vnetAddressSpace = string
subnetAddressSpace = string
privateLinkAddressSpace = string
sku = string
allowGatewayTransit = bool
useRemoteGateway = bool
detachAKSSystemNode = bool
aksAzureManaged = bool
aksRBACEnabled = bool
isBackupInstance = bool // the cluster values below set this field, so the object type must declare it
applicationNodeCount = number
spotNodeCount = number
kubeVersion = string
aksBackupInstanceEnabled = bool
aksBackupInstanceLocation = string
aksBackupInstanceSpotNodeCount = number
aksBackupVnetAddressSpace = string
aksBackupSubnetAddressSpace = string
aksBackupPrivateLinkAddressSpace= string
hosts = list(string)
}))
default = {}
}
aksClusters = {
"int-01" = {
env = "int",
instance = "01"
location = "Canada Central"
vnetAddressSpace = ""
subnetAddressSpace = ""
privateLinkAddressSpace = ""
sku = "Free"
allowGatewayTransit = true
useRemoteGateway = true
detachAKSSystemNode = false
aksAzureManaged = true
aksRBACEnabled = false
isBackupInstance = false
applicationNodeCount = 3
spotNodeCount = 0
kubeVersion = "1.23.8"
aksBackupInstanceEnabled = false
aksBackupInstanceLocation = "France Central"
aksBackupInstanceSpotNodeCount = 1
aksBackupVnetAddressSpace = ""
aksBackupSubnetAddressSpace = ""
aksBackupPrivateLinkAddressSpace = ""
hosts = [ "xxx-int01", "xxx-int01" ]
},
"uat-01" = {
env = "uat",
instance = "01"
location = "Canada Central"
vnetAddressSpace = ""
subnetAddressSpace = ""
privateLinkAddressSpace = ""
sku = "Free"
allowGatewayTransit = true
useRemoteGateway = true
detachAKSSystemNode = false
aksAzureManaged = true
aksRBACEnabled = false
isBackupInstance = false
applicationNodeCount = 3
spotNodeCount = 0
kubeVersion = "1.23.8"
aksBackupInstanceEnabled = false
aksBackupInstanceLocation = "France Central"
aksBackupInstanceSpotNodeCount = 1
aksBackupVnetAddressSpace = ""
aksBackupSubnetAddressSpace = ""
aksBackupPrivateLinkAddressSpace = ""
hosts = [ "xxx-uat01" ]
},
"prod-01" = {
env = "prod",
instance = "01"
location = "Canada Central"
vnetAddressSpace = ""
subnetAddressSpace = ""
privateLinkAddressSpace = ""
sku = "Paid"
allowGatewayTransit = true
useRemoteGateway = true
detachAKSSystemNode = true
aksAzureManaged = true
aksRBACEnabled = true
isBackupInstance = false
applicationNodeCount = 6
spotNodeCount = 0
kubeVersion = "1.23.8"
aksBackupInstanceEnabled = false
aksBackupInstanceLocation = "France Central"
aksBackupInstanceSpotNodeCount = 1
aksBackupVnetAddressSpace = ""
aksBackupSubnetAddressSpace = ""
aksBackupPrivateLinkAddressSpace = ""
hosts = [ "xxx-prd01" ]
}
}
// DNS Zone
variable "dnsZone" {
type = string
default = "domain.app"
}
locals {
frontdoorBackendWebHosts = flatten([
for clusterKey, cluster in var.aksClusters : [
for env in toset(cluster.hosts) : [
for frontend in toset(var.frontdoorFrontEnd) : {
env = env
host = frontend
clusterKey = clusterKey
backupOriginEnabled = cluster.aksBackupInstanceEnabled
hostNameBckp = format("%s.%sbckp.private.%s", frontend, env == "xxx-prd01" ? "xxx" : env, var.dnsZone)
hostName = format("%s.%s.private.%s", frontend, env == "xxx-prd01" ? "xxx" : env, var.dnsZone)
}
]
]
])
frontdoorBackendApiHosts = concat(flatten([
for clusterKey, cluster in var.aksClusters : [
for env in toset(cluster.hosts) : {
env = env
host = "api"
clusterKey = clusterKey
frontend = ""
backupOriginEnabled = cluster.aksBackupInstanceEnabled
hostNameBckp = format("api.%sbckp.private.%s", env == "xxx-prd01" ? "xxx" : env, var.dnsZone)
hostName = format("api.%s.private.%s", env == "xxx-prd01" ? "xxx" : env, var.dnsZone)
}
]
]),
flatten([
for clusterKey, cluster in var.aksClusters : [
for env in toset(cluster.hosts) : [
for frontend in toset(var.frontdoorFrontEnd) : {
env = env
host = "mobile"
frontend = frontend
clusterKey = clusterKey
backupOriginEnabled = cluster.aksBackupInstanceEnabled
hostNameBckp = format("mobile.%s.%sbckp.private.%s", frontend, env == "xxx-prd01" ? "xxx" : env, var.dnsZone)
hostName = format("mobile.%s.%s.private.%s", frontend, env == "xxx-prd01" ? "xxx" : env, var.dnsZone)
}
]
]
]))
frontdoorDomain = flatten([
for clusterKey, cluster in var.aksClusters : [
for env in toset(cluster.hosts) : env
]
])
}
# ----------------- Front Door CDN ----------------------- #
resource "azurerm_cdn_frontdoor_profile" "frontdoor" {
for_each = toset(local.frontdoorDomain)
name = "${var.organization}-${each.key}-frontdoor-cdn"
resource_group_name = azurerm_resource_group.gateway.name
sku_name = "Premium_AzureFrontDoor"
}
# ------------------ Origin group ---------------------- #
resource "azurerm_cdn_frontdoor_origin_group" "conforigin" {
for_each = var.aksClusters
name = "${var.organization}-${each.value.env}${each.value.instance}-aks-conf-origin"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.hosts[0]].id
health_probe {
interval_in_seconds = 30
path = "/qxhealthprobe"
protocol = "Https"
request_type = "GET"
}
load_balancing {
additional_latency_in_milliseconds = 0
sample_size = 16
successful_samples_required = 3
}
depends_on = [azurerm_private_link_service.aks]
}
/* Conf default origin */
resource "azurerm_cdn_frontdoor_origin" "conforigin" {
for_each = var.aksClusters
name = "${each.value.env}${each.value.instance}-aks-conf-origin-host"
cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.conforigin[each.key].id
enabled = true
certificate_name_check_enabled = true
host_name = "conf.${each.value.env}${each.value.instance}.private.${var.dnsZone}"
origin_host_header = "conf.${each.value.env}${each.value.instance}.private.${var.dnsZone}"
priority = 1
weight = 1000
private_link {
request_message = "Request access for Private Link Origin CDN Frontdoor"
location = azurerm_resource_group.aks[each.key].location
private_link_target_id = azurerm_private_link_service.aks[each.key].id
}
}
/* Conf backup origin */
resource "azurerm_cdn_frontdoor_origin" "conforiginbckp" {
for_each = { for key, cluster in var.aksClusters : key => cluster if cluster.aksBackupInstanceEnabled == true }
name = "${each.value.env}${each.value.instance}-aksbckp-conf-origin-host"
cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.conforigin[each.key].id
enabled = true
certificate_name_check_enabled = true
host_name = "conf.${each.value.env}${each.value.instance}bckp.private.${var.dnsZone}"
origin_host_header = "conf.${each.value.env}${each.value.instance}bckp.private.${var.dnsZone}"
priority = 2
weight = 100
private_link {
request_message = "Request access for Private Link Origin CDN Frontdoor"
location = azurerm_resource_group.aksbckp[each.key].location
private_link_target_id = azurerm_private_link_service.aksbckp[each.key].id
}
}
# ------------------ Origin group ---------------------- #
resource "azurerm_cdn_frontdoor_origin_group" "origin" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost }
name = "${each.value.env}-aks-${each.value.host}-origin"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.env].id
health_probe {
interval_in_seconds = 30
path = "/qxhealthprobe"
protocol = "Https"
request_type = "GET"
}
load_balancing {
additional_latency_in_milliseconds = 0
sample_size = 16
successful_samples_required = 3
}
depends_on = [azurerm_private_link_service.aks]
}
/* Web default origin */
resource "azurerm_cdn_frontdoor_origin" "weborigin" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost }
name = "${each.value.env}-aks-${each.value.host}-origin-host"
cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.origin[each.key].id
enabled = true
certificate_name_check_enabled = true
host_name = each.value.hostName
origin_host_header = each.value.hostName
priority = 1
weight = 1000
private_link {
request_message = "Request access for Private Link Origin CDN Frontdoor"
location = azurerm_resource_group.aks[each.value.clusterKey].location
private_link_target_id = azurerm_private_link_service.aks[each.value.clusterKey].id
}
}
/* Web backup origin */
resource "azurerm_cdn_frontdoor_origin" "weboriginbckp" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.backupOriginEnabled == true }
name = "${each.value.env}-aksbckp-${each.value.host}-origin-host"
cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.origin[each.key].id
enabled = true
certificate_name_check_enabled = true
host_name = each.value.hostNameBckp
origin_host_header = each.value.hostNameBckp
priority = 2
weight = 100
private_link {
request_message = "Request access for Private Link Origin CDN Frontdoor"
location = azurerm_resource_group.aksbckp[each.value.clusterKey].location
private_link_target_id = azurerm_private_link_service.aksbckp[each.value.clusterKey].id
}
}
# ------------------ Origin group ---------------------- #
resource "azurerm_cdn_frontdoor_origin_group" "apiorigin" {
for_each = { for index, backendhost in local.frontdoorBackendApiHosts : "${backendhost.env}${backendhost.frontend}${backendhost.host}" => backendhost }
name = "${each.value.env}-aks-${each.value.frontend}${each.value.host}-origin"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.env].id
health_probe {
interval_in_seconds = 30
path = "/qxhealthprobe"
protocol = "Https"
request_type = "GET"
}
load_balancing {
additional_latency_in_milliseconds = 0
sample_size = 16
successful_samples_required = 3
}
depends_on = [azurerm_private_link_service.aks]
}
/* Origin for Web Backend API */
resource "azurerm_cdn_frontdoor_origin" "apiorigin" {
for_each = { for index, backendhost in local.frontdoorBackendApiHosts : "${backendhost.env}${backendhost.frontend}${backendhost.host}" => backendhost }
name = "${each.value.env}-aks-${each.value.frontend}${each.value.host}-origin-host"
cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.apiorigin[each.key].id
enabled = true
certificate_name_check_enabled = true
host_name = each.value.hostName
origin_host_header = each.value.hostName
priority = 1
weight = 1000
private_link {
request_message = "Request access for Private Link Origin CDN Frontdoor"
location = azurerm_resource_group.aks[each.value.clusterKey].location
private_link_target_id = azurerm_private_link_service.aks[each.value.clusterKey].id
}
}
/* Origin for Web Backend API backup */
resource "azurerm_cdn_frontdoor_origin" "apioriginbckp" {
for_each = { for index, backendhost in local.frontdoorBackendApiHosts : "${backendhost.env}${backendhost.frontend}${backendhost.host}" => backendhost if backendhost.backupOriginEnabled == true }
name = "${each.value.env}-aksbckp-${each.value.frontend}${each.value.host}-origin-host"
cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.apiorigin[each.key].id
enabled = true
certificate_name_check_enabled = true
host_name = each.value.hostNameBckp
origin_host_header = each.value.hostNameBckp
priority = 2
weight = 100
private_link {
request_message = "Request access for Private Link Origin CDN Frontdoor"
location = azurerm_resource_group.aksbckp[each.value.clusterKey].location
private_link_target_id = azurerm_private_link_service.aksbckp[each.value.clusterKey].id
}
}
# # ------------------ Endpoint Conf ---------------------- #
resource "azurerm_cdn_frontdoor_endpoint" "conf" {
for_each = var.aksClusters
name = "${each.value.env}${each.value.instance}conf"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.hosts[0]].id
}
resource "azurerm_cdn_frontdoor_custom_domain" "confdomain" {
for_each = var.aksClusters
name = "${each.value.env}${each.value.instance}-aks-conf-domain"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.hosts[0]].id
dns_zone_id = data.azurerm_dns_zone.quartzx.id
host_name = each.value.env == "prod" ? format("conf.public.%s", var.dnsZone) : format("conf.%s%s.public.%s", each.value.env, each.value.instance, var.dnsZone)
tls {
certificate_type = "CustomerCertificate"
minimum_tls_version = "TLS12"
cdn_frontdoor_secret_id = azurerm_cdn_frontdoor_secret.conf[each.key].id
}
}
resource "azurerm_dns_cname_record" "confdomaindns" {
provider = azurerm.production
for_each = var.aksClusters
name = each.value.env == "prod" ? format("conf.public.%s", var.dnsZone) : format("conf.%s%s.public", each.value.env, each.value.instance)
zone_name = data.azurerm_dns_zone.quartzx.name
resource_group_name = data.azurerm_dns_zone.quartzx.resource_group_name
ttl = 3600
record = azurerm_cdn_frontdoor_endpoint.conf[each.key].host_name
depends_on = [azurerm_cdn_frontdoor_route.confroute]
}
resource "azurerm_dns_txt_record" "confdomaindnstxt" {
provider = azurerm.production
for_each = var.aksClusters
name = each.value.env == "prod" ? "_dnsauth.conf.public" : format("_dnsauth.conf.%s%s.public", each.value.env, each.value.instance)
zone_name = data.azurerm_dns_zone.quartzx.name
resource_group_name = data.azurerm_dns_zone.quartzx.resource_group_name
ttl = 3600
record {
value = azurerm_cdn_frontdoor_custom_domain.confdomain[each.key].validation_token
}
}
resource "azurerm_cdn_frontdoor_route" "confroute" {
for_each = var.aksClusters
name = "${each.value.env}${each.value.instance}-aks-conf-route"
cdn_frontdoor_endpoint_id = azurerm_cdn_frontdoor_endpoint.conf[each.key].id
cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.conforigin[each.key].id
cdn_frontdoor_origin_ids = concat([azurerm_cdn_frontdoor_origin.conforigin[each.key].id], each.value.aksBackupInstanceEnabled ? [azurerm_cdn_frontdoor_origin.conforiginbckp[each.key].id] : [])
cdn_frontdoor_rule_set_ids = []
enabled = true
forwarding_protocol = "HttpsOnly"
https_redirect_enabled = true
patterns_to_match = ["/*"]
supported_protocols = ["Http", "Https"]
cdn_frontdoor_custom_domain_ids = [azurerm_cdn_frontdoor_custom_domain.confdomain[each.key].id]
link_to_default_domain = false
}
resource "azurerm_cdn_frontdoor_custom_domain_association" "confdomain" {
for_each = var.aksClusters
cdn_frontdoor_custom_domain_id = azurerm_cdn_frontdoor_custom_domain.confdomain[each.key].id
cdn_frontdoor_route_ids = [azurerm_cdn_frontdoor_route.confroute[each.key].id ]
}
# ------------------ Endpoint Web ---------------------- #
resource "azurerm_cdn_frontdoor_endpoint" "web" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost }
name = "${each.value.env}${each.value.host}"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.env].id
tags = {}
}
resource "azurerm_cdn_frontdoor_custom_domain" "webdomain" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost }
name = "${each.value.env}-aks-${each.value.host}-domain"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.env].id
dns_zone_id = data.azurerm_dns_zone.quartzx.id
host_name = format("%s.%s.public.%s", each.value.host, each.value.env == "xxx-prd01" ? "xxx" : each.value.env, var.dnsZone)
tls {
certificate_type = "CustomerCertificate"
minimum_tls_version = "TLS12"
cdn_frontdoor_secret_id = azurerm_cdn_frontdoor_secret.wildcard[each.value.env].id
}
}
resource "azurerm_dns_cname_record" "webdomaindns" {
provider = azurerm.production
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost }
name = format("%s.%s.public", each.value.host, each.value.env == "xxx-prd01" ? "xxx" : each.value.env)
zone_name = data.azurerm_dns_zone.quartzx.name
resource_group_name = data.azurerm_dns_zone.quartzx.resource_group_name
ttl = 3600
record = azurerm_cdn_frontdoor_endpoint.web[each.key].host_name
depends_on = [azurerm_cdn_frontdoor_route.webroute]
}
resource "azurerm_dns_txt_record" "webdomaindnstxt" {
provider = azurerm.production
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost }
name = format("_dnsauth.%s.%s.public", each.value.host, each.value.env == "xxx-prd01" ? "xxx" : each.value.env)
zone_name = data.azurerm_dns_zone.quartzx.name
resource_group_name = data.azurerm_dns_zone.quartzx.resource_group_name
ttl = 3600
record {
value = azurerm_cdn_frontdoor_custom_domain.webdomain[each.key].validation_token
}
}
resource "azurerm_cdn_frontdoor_route" "webroute" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost }
name = "${each.value.env}-aks-${each.value.host}-route"
cdn_frontdoor_endpoint_id = azurerm_cdn_frontdoor_endpoint.web[each.key].id
cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.origin[each.key].id
cdn_frontdoor_origin_ids = concat([azurerm_cdn_frontdoor_origin.weborigin[each.key].id], each.value.backupOriginEnabled ? [azurerm_cdn_frontdoor_origin.weboriginbckp[each.key].id] : [])
cdn_frontdoor_rule_set_ids = []
enabled = true
forwarding_protocol = "HttpsOnly"
https_redirect_enabled = true
patterns_to_match = ["/*"]
supported_protocols = ["Http", "Https"]
cdn_frontdoor_custom_domain_ids = [azurerm_cdn_frontdoor_custom_domain.webdomain[each.key].id]
link_to_default_domain = false
}
resource "azurerm_cdn_frontdoor_custom_domain_association" "webdomain" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost }
cdn_frontdoor_custom_domain_id = azurerm_cdn_frontdoor_custom_domain.webdomain[each.key].id
cdn_frontdoor_route_ids = [azurerm_cdn_frontdoor_route.webroute[each.key].id ]
}
# ------------------ Endpoint API ---------------------- #
resource "azurerm_cdn_frontdoor_custom_domain" "apidomain" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.host != "ingress" }
name = "${each.value.env}-aks-${each.value.host}-api-domain"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.env].id
dns_zone_id = data.azurerm_dns_zone.quartzx.id
host_name = format("api.%s.%s.public.%s", each.value.host, each.value.env == "xxx-prd01" ? "xxx" : each.value.env, var.dnsZone)
tls {
certificate_type = "CustomerCertificate"
minimum_tls_version = "TLS12"
cdn_frontdoor_secret_id = azurerm_cdn_frontdoor_secret.api[each.value.env].id
}
}
resource "azurerm_cdn_frontdoor_route" "apiroute" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.host != "ingress" }
name = "${each.value.env}-aks-${each.value.host}-api-route"
cdn_frontdoor_endpoint_id = azurerm_cdn_frontdoor_endpoint.web[each.key].id
cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.apiorigin["${each.value.env}api"].id
cdn_frontdoor_origin_ids = concat([azurerm_cdn_frontdoor_origin.apiorigin["${each.value.env}api"].id], each.value.backupOriginEnabled ? [azurerm_cdn_frontdoor_origin.apioriginbckp["${each.value.env}api"].id] : [])
cdn_frontdoor_rule_set_ids = []
enabled = true
forwarding_protocol = "HttpsOnly"
https_redirect_enabled = true
patterns_to_match = ["/*"]
supported_protocols = ["Http", "Https"]
cdn_frontdoor_custom_domain_ids = [azurerm_cdn_frontdoor_custom_domain.apidomain[each.key].id]
link_to_default_domain = false
}
resource "azurerm_dns_cname_record" "apidomaindns" {
provider = azurerm.production
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.host != "ingress" }
name = format("api.%s.%s.public", each.value.host, each.value.env == "xxx-prd01" ? "xxx" : each.value.env)
zone_name = data.azurerm_dns_zone.quartzx.name
resource_group_name = data.azurerm_dns_zone.quartzx.resource_group_name
ttl = 3600
record = azurerm_cdn_frontdoor_endpoint.web[each.key].host_name
depends_on = [azurerm_cdn_frontdoor_route.apiroute]
}
resource "azurerm_dns_txt_record" "apidomaindnstxt" {
provider = azurerm.production
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.host != "ingress" }
name = format("_dnsauth.api.%s.%s.public", each.value.host, each.value.env == "xxx-prd01" ? "xxx" : each.value.env)
zone_name = data.azurerm_dns_zone.quartzx.name
resource_group_name = data.azurerm_dns_zone.quartzx.resource_group_name
ttl = 3600
record {
value = azurerm_cdn_frontdoor_custom_domain.apidomain[each.key].validation_token
}
}
resource "azurerm_cdn_frontdoor_custom_domain_association" "apidomain" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.host != "ingress" }
cdn_frontdoor_custom_domain_id = azurerm_cdn_frontdoor_custom_domain.apidomain[each.key].id
cdn_frontdoor_route_ids = [azurerm_cdn_frontdoor_route.apiroute[each.key].id ]
}
# ------------------ Endpoint API ---------------------- #
resource "azurerm_cdn_frontdoor_custom_domain" "mobiledomain" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.host != "ingress" }
name = "${each.value.env}-aks-${each.value.host}-mobile-domain"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.env].id
dns_zone_id = data.azurerm_dns_zone.quartzx.id
host_name = format("mobile.%s.%s.public.%s", each.value.host, each.value.env == "xxx-prd01" ? "xxx" : each.value.env, var.dnsZone)
tls {
certificate_type = "CustomerCertificate"
minimum_tls_version = "TLS12"
cdn_frontdoor_secret_id = azurerm_cdn_frontdoor_secret.api[each.value.env].id
}
}
resource "azurerm_cdn_frontdoor_route" "mobileroute" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.host != "ingress" }
name = "${each.value.env}-aks-${each.value.host}-mobile-route"
cdn_frontdoor_endpoint_id = azurerm_cdn_frontdoor_endpoint.web[each.key].id
cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.apiorigin["${each.value.env}${each.value.host}mobile"].id
cdn_frontdoor_origin_ids = concat([azurerm_cdn_frontdoor_origin.apiorigin["${each.value.env}${each.value.host}mobile"].id], each.value.backupOriginEnabled ? [azurerm_cdn_frontdoor_origin.apioriginbckp["${each.value.env}${each.value.host}mobile"].id] : [])
cdn_frontdoor_rule_set_ids = []
enabled = true
forwarding_protocol = "HttpsOnly"
https_redirect_enabled = true
patterns_to_match = ["/*"]
supported_protocols = ["Http", "Https"]
cdn_frontdoor_custom_domain_ids = [azurerm_cdn_frontdoor_custom_domain.mobiledomain[each.key].id]
link_to_default_domain = false
}
resource "azurerm_dns_cname_record" "mobiledomaindns" {
provider = azurerm.production
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.host != "ingress" }
name = format("mobile.%s.%s.public", each.value.host, each.value.env == "xxx-prd01" ? "xxx" : each.value.env)
zone_name = data.azurerm_dns_zone.quartzx.name
resource_group_name = data.azurerm_dns_zone.quartzx.resource_group_name
ttl = 3600
record = azurerm_cdn_frontdoor_endpoint.web[each.key].host_name
depends_on = [azurerm_cdn_frontdoor_route.mobileroute]
}
resource "azurerm_dns_txt_record" "mobiledomaindnstxt" {
provider = azurerm.production
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.host != "ingress" }
name = format("_dnsauth.mobile.%s.%s.public", each.value.host, each.value.env == "xxx-prd01" ? "xxx" : each.value.env)
zone_name = data.azurerm_dns_zone.quartzx.name
resource_group_name = data.azurerm_dns_zone.quartzx.resource_group_name
ttl = 3600
record {
value = azurerm_cdn_frontdoor_custom_domain.mobiledomain[each.key].validation_token
}
}
resource "azurerm_cdn_frontdoor_custom_domain_association" "mobiledomain" {
for_each = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost if backendhost.host != "ingress" }
cdn_frontdoor_custom_domain_id = azurerm_cdn_frontdoor_custom_domain.mobiledomain[each.key].id
cdn_frontdoor_route_ids = [azurerm_cdn_frontdoor_route.mobileroute[each.key].id ]
}
# ------------------ Secrets Certificates ---------------------- #
resource "azurerm_cdn_frontdoor_secret" "wildcard" {
for_each = toset(var.environmentNames)
name = "qx-${each.value}-wildcard-certificate"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.key].id
secret {
customer_certificate {
key_vault_certificate_id = data.azurerm_key_vault_certificate.wildcardpublic[each.value].versionless_id
}
}
}
resource "azurerm_cdn_frontdoor_secret" "api" {
for_each = toset(var.environmentNames)
name = "qx-${each.value}-api-certificate"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.key].id
secret {
customer_certificate {
key_vault_certificate_id = data.azurerm_key_vault_certificate.apipublic[each.value].versionless_id
}
}
}
resource "azurerm_cdn_frontdoor_secret" "conf" {
for_each = var.aksClusters
name = "${var.organization}-${each.value.env}${each.value.instance}-conf-certificate"
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.hosts[0]].id
secret {
customer_certificate {
key_vault_certificate_id = data.azurerm_key_vault_certificate.confpublic[each.key].versionless_id
}
}
}
```
### Debug Output/Panic Output
```shell
│ with azurerm_cdn_frontdoor_endpoint.web["xxx-int01extranet"],
│ on Script.FrontDoorCdn.tf line 339, in resource "azurerm_cdn_frontdoor_endpoint" "web":
│ 339: resource "azurerm_cdn_frontdoor_endpoint" "web" {
│
╵
╷
│ Error: retrieving Front Door Endpoint: (Afd Endpoint Name "xxx-int01-aks-ingress-endpoint" / Profile Name "xx-apl-int01-frontdoor-cdn" / Resource Group "xx-prod-gateway"): cdn.AFDEndpointsClient#Get: Failure sending request: StatusCode=429 -- Original Error: context deadline exceeded
│
│ with azurerm_cdn_frontdoor_endpoint.web["xxx-int01ingress"],
│ on Script.FrontDoorCdn.tf line 339, in resource "azurerm_cdn_frontdoor_endpoint" "web":
│ 339: resource "azurerm_cdn_frontdoor_endpoint" "web" {
│
╵
╷
│ Error: retrieving Front Door Endpoint: (Afd Endpoint Name "xxx-int01-aks-gov-endpoint" / Profile Name "xx-apl-int01-frontdoor-cdn" / Resource Group "qx-prod-gateway"): cdn.AFDEndpointsClient#Get: Failure sending request: StatusCode=429 -- Original Error: context deadline exceeded
│
│ with azurerm_cdn_frontdoor_endpoint.web["xxx-int01gov"],
│ on Script.FrontDoorCdn.tf line 339, in resource "azurerm_cdn_frontdoor_endpoint" "web":
│ 339: resource "azurerm_cdn_frontdoor_endpoint" "web" {
│
╵
╷
│ Error: retrieving Front Door Endpoint: (Afd Endpoint Name "xxx-int01-aks-govcloud-endpoint" / Profile Name "xx-psp-int01-frontdoor-cdn" / Resource Group "xx-prod-gateway"): cdn.AFDEndpointsClient#Get: Failure sending request: StatusCode=429 -- Original Error: context deadline exceeded
│
│ with azurerm_cdn_frontdoor_endpoint.web["xxx-int01govcloud"],
│ on Script.FrontDoorCdn.tf line 339, in resource "azurerm_cdn_frontdoor_endpoint" "web":
│ 339: resource "azurerm_cdn_frontdoor_endpoint" "web" {
│
╵
╷
│ Error: retrieving Front Door Endpoint: (Afd Endpoint Name "xxx-uat01-aks-diligence-endpoint" / Profile Name "xx-psp-uat01-frontdoor-cdn" / Resource Group "xx-prod-gateway"): cdn.AFDEndpointsClient#Get: Failure sending request: StatusCode=429 -- Original Error: context deadline exceeded
│
│ with azurerm_cdn_frontdoor_endpoint.web["xxx-uat01diligence"],
│ on Script.FrontDoorCdn.tf line 339, in resource "azurerm_cdn_frontdoor_endpoint" "web":
│ 339: resource "azurerm_cdn_frontdoor_endpoint" "web" {
```
### Expected Behaviour
Be able to refresh state after resource creation.
### Actual Behaviour
Terraform is unable to refresh state after creation due to this error, which occurs at random.
The error does not affect one particular azurerm_cdn_frontdoor_endpoint; it hits a random batch of them on each run.
### Steps to Reproduce
Running `terraform plan` or `terraform apply` generates the same error.
### Important Factoids
A large number of Front Door profiles and endpoints are created.
Terraform is able to create the resources, but not to read them all back after creation.
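A possible mitigation while this is investigated, offered only as a sketch: lower the CLI concurrency so fewer GET requests hit Azure Resource Manager at once (for example `terraform plan -parallelism=3` instead of the default 10), and/or extend the read timeout on the endpoint resource. The block below assumes azurerm_cdn_frontdoor_endpoint supports the provider's standard `timeouts` arguments (read typically defaults to 5 minutes):

```hcl
# Hypothetical tweak, not part of the configuration above: give throttled
# reads more room to back off and retry before the context deadline expires.
resource "azurerm_cdn_frontdoor_endpoint" "web" {
  for_each                 = { for index, backendhost in local.frontdoorBackendWebHosts : "${backendhost.env}${backendhost.host}" => backendhost }
  name                     = "${each.value.env}${each.value.host}"
  cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.frontdoor[each.value.env].id

  timeouts {
    read = "15m"
  }
}
```

Both knobs only make the 429s less likely to be fatal; the underlying Resource Manager rate limit remains.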
### References
_No response_ | 1.0 | non_priority | 0
124,988 | 26,571,771,844 | IssuesEvent | 2023-01-21 08:39:48 | thirtybees/thirtybees | https://api.github.com/repos/thirtybees/thirtybees | closed | rename displayPrice in tools.js | Bug Code Quality | I previously made a forum post about a conflict between the function displayPrice() in tools.js and AWP:
https://forum.thirtybees.com/topic/6009-awp-and-thirty-bees/
I looked a little further and saw that PrestaShop 1.6.1 uses the function formatCurrency() for the same purpose. When you look in Thirty Bees, that function formatCurrency is still there, but it just calls the function displayPrice() with exactly the same arguments. Interestingly, this TB function formatCurrency also contains a comment:
`// Really? It's used quite a lot.
//console.log('Deprecated with v1.1.0. Use displayPrice() directly.');`
displayPrice has a comment too:
` * This also uses global priceDisplayPrecision, so this should be right.
* Formatting should match Tools::displayPrice().`
My conclusion is that the only reason for renaming this function was to make it have the same name as its Tools.php equivalent; a comment in Tools.php even says so explicitly.
I think this was a bad reason for renaming a function and it should be turned back. | 1.0 | non_priority | 0 |
33,383 | 6,199,893,978 | IssuesEvent | 2017-07-05 22:54:48 | goliatone/core.io-express-auth | https://api.github.com/repos/goliatone/core.io-express-auth | opened | If core.io-express-server loads before persistence we get error | documentation enhancement | If you get an error on boot saying:
```
Auth module needs Passport model defined
```
Make sure that you have a configuration file that looks like this:
```js
passport: {
    failureRedirect: '/login',
    successReturnToOrRedirect: '/',
    getPassportUser: function(){
        return global.PassportUser;
    },
    getPassport: function(){
        return global.Passport;
    },
}
```
If you still get an error, ensure that you are declaring `persistence` as a dependency in `config/server.js`.
```js
dependencies: [
    'persistence'
]
```
Add to documentation. We should try to handle this error in our auth module. | 1.0 | non_priority | 0 |
330,898 | 24,282,475,117 | IssuesEvent | 2022-09-28 18:42:10 | neevaco/neeva-android | https://api.github.com/repos/neevaco/neeva-android | opened | Chromium: Download Renaming Risks | documentation | Originally when downloading, `weblayer` on Android would simply overwrite files with the same name.
In the [file_util.cc](https://source.chromium.org/chromium/chromium/src/+/main:base/files/file_util.cc;l=471;drc=a432cd59d51281057ba2a2673ca645a9600bb927;bpv=1;bpt=1) there is a function `base::GetUniquePath` that will generate a new filename by `appending (%d)` before the extension where `%d` is a unique number between 1 and 100. After the 100th name, it would actually just give up.
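A rough sketch of that scheme, plus the timestamp fallback described below (illustrative Python, not Chromium's actual C++):

```python
import os
import time

def get_unique_path(path):
    """Illustrative sketch of base::GetUniquePath-style uniquification."""
    if not os.path.exists(path):
        return path
    root, ext = os.path.splitext(path)
    for count in range(1, 101):  # mirrors the "(%d)" suffix for 1..100
        candidate = "%s (%d)%s" % (root, count, ext)
        if not os.path.exists(candidate):
            return candidate
    # Instead of giving up after the 100th name, fall back to a unique
    # timestamp suffix (the behavior adopted in the PR referenced below).
    return "%s (%d)%s" % (root, int(time.time() * 1000), ext)
```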
@darinwf speculates that android memory is a very scarce resource and the fact that the chromium engineers didn't pursue further uniquification is probably to prevent malicious websites from overloading your phone with dummy files. Of course, the malicious website could also just generate unique names so it isn't really that great of a protection.
So in this [Chromium Download Rename PR](https://github.com/neevaco/chromium/pull/15), I've decided to take that malicious website risk and start using unique timestamps after the 100th number name. | 1.0 | non_priority | 0 |
330,830 | 24,279,248,626 | IssuesEvent | 2022-09-28 15:58:44 | openfoodfacts/facets-knowledge-panels | https://api.github.com/repos/openfoodfacts/facets-knowledge-panels | closed | Property documentation | documentation | ### Story
- Document all the food properties in Wikidata in a structured format
- Ensure the code is flexible enough to be extended as some new properties are added
- Ensure we document how to use them
### Part of
- Wikidata integration | 1.0 | non_priority | 0 |
60,035 | 8,401,259,282 | IssuesEvent | 2018-10-11 00:01:12 | GCES-2018-2/SIGS-GCES | https://api.github.com/repos/GCES-2018-2/SIGS-GCES | closed | Update README | Hacktoberfest documentation enhancement good first issue help wanted | ## Problem description
There are a lot of things in the README that need to be changed.
## Expected Behavior
Clean README
## Current Behavior
Here is what needs to be removed or updated:
- Remove Heroku Deploy (it's not being used)
- Re-link License
- Update Docker commands with dev and prod environments
- Remove Vagrant Tutorial (it's not being used)
| 1.0 | non_priority | 0 |
59,698 | 14,442,806,331 | IssuesEvent | 2020-12-07 18:41:45 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | CVE-2020-8554: Man in the middle using LoadBalancer or ExternalIPs | area/security committee/product-security kind/bug sig/network | CVSS Rating: **Medium** ([CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L](https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L))
This issue affects multitenant clusters. If a potential attacker can already create or edit services and pods, then they may be able to intercept traffic from other pods (or nodes) in the cluster.
An attacker that is able to create a ClusterIP service and set the spec.externalIPs field can intercept traffic to that IP. An attacker that is able to patch the status (which is considered a privileged operation and should not typically be granted to users) of a LoadBalancer service can set the status.loadBalancer.ingress.ip to similar effect.
This issue is a design flaw that cannot be mitigated without user-facing changes.
### Affected Components and Configurations
All Kubernetes versions are affected. Multi-tenant clusters that grant tenants the ability to create and update services and pods are most vulnerable.
### Mitigations
There is no patch for this issue, and it can currently only be mitigated by restricting access to the vulnerable features. Because an in-tree fix would require a breaking change, we will open a conversation about a longer-term fix or built-in mitigation after the embargo is lifted.
To restrict the use of external IPs we are providing an admission webhook container: k8s.gcr.io/multitenancy/externalip-webhook:v1.0.0. The source code and deployment instructions are published at https://github.com/kubernetes-sigs/externalip-webhook.
Alternatively, external IPs can be restricted using [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper). A sample ConstraintTemplate and Constraint can be found here: https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/general/externalip.
No mitigations are provided for LoadBalancer IPs since we do not recommend granting users *patch service/status* permission. If LoadBalancer IP restrictions are required, the approach for the external IP mitigations can be copied.
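As a sketch of what such a restriction can look like in practice (assumed logic, not the actual webhook or Gatekeeper implementation), the admission check boils down to denying Services whose `spec.externalIPs` fall outside an allow-list:

```python
# Hypothetical allow-list; a real deployment would make this configurable.
ALLOWED_EXTERNAL_IPS = {"203.0.113.10"}

def validate_service(service: dict):
    """Return (allowed, message) for a Service object from an admission review."""
    external_ips = service.get("spec", {}).get("externalIPs", [])
    denied = [ip for ip in external_ips if ip not in ALLOWED_EXTERNAL_IPS]
    if denied:
        return False, "externalIPs not allowed: %s" % denied
    return True, "ok"

# Example: this Service would be rejected.
print(validate_service({"spec": {"externalIPs": ["198.51.100.7"]}}))
```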
### Detection
ExternalIP services are not widely used, so we recommend manually auditing any external IP usage. Users should not patch service status, so audit events for patch service status requests authenticated to a user may be suspicious.
If you find evidence that this vulnerability has been exploited, please contact security@kubernetes.io
#### Acknowledgements
This vulnerability was reported by Etienne Champetier (@champtar) of Anevia.
/area security
/kind bug
/committee product-security
/sig network
| True | non_priority | 0 |
112,871 | 24,338,142,363 | IssuesEvent | 2022-10-01 10:45:48 | niteshjitender/HacktoberFest2022 | https://api.github.com/repos/niteshjitender/HacktoberFest2022 | closed | Add contributors section in Readme (No Coding, Documentation) | documentation good first issue hacktoberfest Beginner No-code | Add a contributors section in Readme in which a Profile pic of the contributors is shown. | 1.0 | non_priority | 0 |
68,949 | 14,966,607,141 | IssuesEvent | 2021-01-27 14:46:49 | MValle21/session | https://api.github.com/repos/MValle21/session | opened | WS-2017-0247 (Low) detected in ms-0.7.1.tgz | security vulnerability | ## WS-2017-0247 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ms-0.7.1.tgz</b></p></summary>
<p>Tiny ms conversion utility</p>
<p>Library home page: <a href="https://registry.npmjs.org/ms/-/ms-0.7.1.tgz">https://registry.npmjs.org/ms/-/ms-0.7.1.tgz</a></p>
<p>Path to dependency file: session/package.json</p>
<p>Path to vulnerable library: session/node_modules/nyc/node_modules/ms/package.json</p>
<p>
Dependency Hierarchy:
- nyc-8.4.0.tgz (Root Library)
- istanbul-lib-instrument-1.2.0.tgz
- babel-traverse-6.18.0.tgz
- debug-2.2.0.tgz
- :x: **ms-0.7.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/MValle21/session/commit/35868837b539bc9caed7bacee6812d6ce7db97e8">35868837b539bc9caed7bacee6812d6ce7db97e8</a></p>
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS).
<p>Publish Date: 2017-04-12
<p>URL: <a href=https://github.com/zeit/ms/commit/305f2ddcd4eff7cc7c518aca6bb2b2d2daad8fef>WS-2017-0247</a></p>
</p>
</details>
<p></p>
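For context, the failure mode behind ReDoS is catastrophic regex backtracking. A minimal, generic illustration follows; this pattern is an assumption for demonstration and is not the actual regex shipped in `ms` 0.7.1:

```python
import re
import time

# A classic catastrophically-backtracking pattern: the failing match
# explores roughly 2^n ways to split the "a" run between the groups.
pattern = re.compile(r"^(a+)+$")

for n in (18, 20, 22):
    text = "a" * n + "!"  # trailing "!" forces the overall match to fail
    start = time.perf_counter()
    pattern.match(text)
    print(n, "->", round(time.perf_counter() - start, 3), "s")
```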
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>3.4</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/vercel/ms/pull/89">https://github.com/vercel/ms/pull/89</a></p>
<p>Release Date: 2017-04-12</p>
<p>Fix Resolution: 2.1.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ms","packageVersion":"0.7.1","isTransitiveDependency":true,"dependencyTree":"nyc:8.4.0;istanbul-lib-instrument:1.2.0;babel-traverse:6.18.0;debug:2.2.0;ms:0.7.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.1.1"}],"vulnerabilityIdentifier":"WS-2017-0247","vulnerabilityDetails":"Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS).","vulnerabilityUrl":"https://github.com/zeit/ms/commit/305f2ddcd4eff7cc7c518aca6bb2b2d2daad8fef","cvss2Severity":"low","cvss2Score":"3.4","extraData":{}}</REMEDIATE> --> | True | non_priority | 0 |
190,231 | 22,047,332,806 | IssuesEvent | 2022-05-30 04:18:22 | pazhanivel07/linux-4.19.72 | https://api.github.com/repos/pazhanivel07/linux-4.19.72 | closed | CVE-2021-45485 (High) detected in linux-yoctov5.4.51 - autoclosed | security vulnerability | ## CVE-2021-45485 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/linux-4.19.72/commit/ce28e4f7a922d93d9b737061ae46827305c8c30a">ce28e4f7a922d93d9b737061ae46827305c8c30a</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv6/output_core.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv6/output_core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the IPv6 implementation in the Linux kernel before 5.13.3, net/ipv6/output_core.c has an information leak because of certain use of a hash table which, although big, doesn't properly consider that IPv6-based attackers can typically choose among many IPv6 source addresses.
<p>Publish Date: 2021-12-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-45485>CVE-2021-45485</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45485">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45485</a></p>
<p>Release Date: 2021-12-25</p>
<p>Fix Resolution: v4.4.276,v4.9.276,v4.14.240,v4.19.198,v5.4.133,v5.10.51,v5.12.18,v5.13.3,v5.14-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
25,294 | 4,282,901,920 | IssuesEvent | 2016-07-15 11:10:13 | schuel/hmmm | https://api.github.com/repos/schuel/hmmm | closed | Exception when enrolling | defect | Click "I want to take part" -> "Exception while simulating the effect of invoking 'change_comment'"
Enrolling works because the exception happens after the role change was processed. | 1.0 | non_priority | 0 |
442,009 | 30,811,582,659 | IssuesEvent | 2023-08-01 10:45:28 | TheOdinProject/curriculum | https://api.github.com/repos/TheOdinProject/curriculum | closed | Style Guide: add punctuation rules for list items | Type: Documentation | ### Describe your suggestion
As discussed here: https://github.com/TheOdinProject/curriculum/issues/25887#issuecomment-1646556086 it would be good to add some guidance in the style guide on how we would like to format list items.
Proposal:
- [ ] make sure the [current list examples](https://github.com/TheOdinProject/curriculum/blob/main/LAYOUT_STYLE_GUIDE.md#lists) in the style guide follow the [google styleguide](https://developers.google.com/style/lists#numbered-lettered-bulleted-lists)
- [ ] add some instructions on how to deal with punctuation and capitalization in lists
- [ ] add a link to the [Google style guide section on lists](https://developers.google.com/style/lists#numbered-lettered-bulleted-lists) for examples
### Path
Other / NA
### Lesson Url
https://github.com/TheOdinProject/curriculum/blob/main/LAYOUT_STYLE_GUIDE.md#lists
### Checks
- [X] I have thoroughly read and understand [The Odin Project Contributing Guide](https://github.com/TheOdinProject/.github/blob/main/CONTRIBUTING.md)
- [ ] Would you like to work on this issue?
Another learner indicated they'd be willing to make this change to the style guide
### (Optional) Discord Name
_No response_
### (Optional) Additional Comments
_No response_ | 1.0 | non_priority | 0 |
2,884 | 29,187,064,143 | IssuesEvent | 2023-05-19 16:19:10 | pulumi/pulumi-aws | https://api.github.com/repos/pulumi/pulumi-aws | closed | Created SQS redrive policy shows up in diff | kind/bug impact/reliability customer/feedback bug/diff | ### What happened?
Created a SQS queue and associated dead-letter-queue using the following code:
```
def create_sqs_queue_with_dlq(
queue_name: str, *,
message_retention_seconds: int = 3600,
delay_seconds: int = 60,
visibility_timeout_seconds: int = 60,
content_based_deduplication: bool = False,
fifo_queue: bool = False) -> Queue:
if fifo_queue:
queue_name = queue_name + ".fifo"
queue = Queue(
queue_name,
name=queue_name, # Avoid Pulumi random char suffix.
message_retention_seconds=message_retention_seconds,
delay_seconds=delay_seconds,
visibility_timeout_seconds=visibility_timeout_seconds,
content_based_deduplication=content_based_deduplication,
fifo_queue=fifo_queue)
# Create a Dead-Letter-Queue
dlq_name: str = f"{queue_name}-DLQ"
queue_dlq = Queue(
dlq_name,
name=dlq_name, # Avoid Pulumi random char suffix.
message_retention_seconds=1209600, # 60 * 60 * 24 * 14 - SQS max message retention is 14 days
visibility_timeout_seconds=visibility_timeout_seconds,
fifo_queue=fifo_queue,
redrive_allow_policy=queue.arn.apply(
lambda arn: json.dumps({
"redrivePermission": "byQueue",
"sourceQueueArns": [arn],
})
)
)
# Create a redrive policy for the queue to send messages to dlq.
redrive_policy = RedrivePolicy(
"redrivePolicy",
queue_url=queue.id,
redrive_policy=queue_dlq.arn.apply(
lambda arn: json.dumps({
"deadLetterTargetArn": arn,
"maxReceiveCount": 4,
})
)
)
return queue
```
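For reference, the helper was invoked roughly like this (the queue name below is hypothetical):

```python
queue = create_sqs_queue_with_dlq(
    "development-myapp-sqs",  # hypothetical queue name
    message_retention_seconds=3600,
    visibility_timeout_seconds=60,
)
```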
The AWS resources get created and what shows in the console is fine. But when I run `pulumi up` again, instead of reporting nothing to change, I see the following:
```
Type Name Plan Info
pulumi:pulumi:Stack myproject-development
~ └─ aws:sqs:RedrivePolicy redrivePolicy update [diff: ~redrivePolicy]
Resources:
~ 1 to update
5 unchanged
```
Pulumi seems to not recognize that the redrive policy has been created correctly.
Output of `diff` :
```
$ pulumi preview --diff
Previewing update (project/myproj-compute/myproj-development)
pulumi:pulumi:Stack: (same)
[urn=urn:pulumi:myproj-development::myproj-compute::pulumi:pulumi:Stack::myproj-compute-myproj-development]
~ aws:sqs/redrivePolicy:RedrivePolicy: (update)
[id=https://sqs.us-west-2.amazonaws.com/<acct_id>/development-myapp-sqs]
[urn=urn:pulumi:myproj-development::myproj-compute::aws:sqs/redrivePolicy:RedrivePolicy::redrivePolicy]
[provider=urn:pulumi:myproj-development::myproj-compute::pulumi:providers:aws::default_5_18_0::3e6f7b6a-3749-4965-8a22-261e940628a0]
~ redrivePolicy: (json) {
deadLetterTargetArn: "arn:aws:sqs:us-west-2:<acct_id>:development-myapp-sqs-DLQ"
maxReceiveCount : 4
}
Resources:
~ 1 to update
5 unchanged
```
### Steps to reproduce
Using the above code snippet to create an AWS SQS queue and DLQ should show the issue.
### Expected Behavior
After running `pulumi up` once and successfully creating the resources, I should not see any further updates when running `pulumi up` or diff subsequently without any modification to code.
### Actual Behavior
Pulumi indicates that the SQS redrive policy has not been created.
### Output of `pulumi about`
_No response_
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
| True | non_priority | 0 |
6,239 | 8,638,222,438 | IssuesEvent | 2018-11-23 14:03:32 | pingcap/tidb | https://api.github.com/repos/pingcap/tidb | opened | GRANT statements do not invalidate privilege cache | type/compatibility | ## Bug Report
Please answer these questions before submitting your issue. Thanks!
1. What did you do?
```
CREATE USER ted;
```
Immediately after:
```
mysql -u ted
```
2. What did you expect to see?
Login should work.
3. What did you see instead?
Error (requires `FLUSH PRIVILEGES` to be run).
For MySQL the following commands imply `FLUSH PRIVILEGES`:
- GRANT
- CREATE USER
- SET PASSWORD
- REVOKE
- DROP USER
The TiDB test suite also contains a lot of `FLUSH PRIVILEGES` commands that for MySQL compatibility would not be required.
This has the greatest impact on users that have permission to change their password but don't have the ability to `FLUSH PRIVILEGES`! (fixed in https://github.com/pingcap/tidb/pull/8426) That is, it is a very strange semantic for the change to take effect only much further down the line.
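For completeness, a scripted version of the reproduction above; the connection parameters are hypothetical and this assumes the `pymysql` client:

```python
import pymysql

# Hypothetical connection parameters; adjust host/port for your TiDB server.
admin = pymysql.connect(host="127.0.0.1", port=4000, user="root")
with admin.cursor() as cur:
    cur.execute("CREATE USER ted")
admin.commit()

# MySQL-compatible behavior: this login should succeed immediately.
# Observed on TiDB: it fails until FLUSH PRIVILEGES is run.
try:
    pymysql.connect(host="127.0.0.1", port=4000, user="ted")
    print("login ok")
except pymysql.err.OperationalError as exc:
    print("login failed:", exc)
```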
4. What version of TiDB are you using (`tidb-server -V` or run `select tidb_version();` on TiDB)?
```
mysql> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v2.1.0-rc.3-213-g4ec77d5cb
Git Commit Hash: 4ec77d5cbe2536b587afa457a7dcc06bac61eda4
Git Branch: set-password
UTC Build Time: 2018-11-23 01:59:37
GoVersion: go version go1.11 linux/amd64
Race Enabled: false
TiKV Min Version: 2.1.0-alpha.1-ff3dd160846b7d1aed9079c389fc188f7f5ea13e
Check Table Before Drop: false
1 row in set (0.00 sec)
```
| True | non_priority | 0 |
107,967 | 16,762,615,310 | IssuesEvent | 2021-06-14 02:37:58 | gms-ws-demo/nibrs-pr-test | https://api.github.com/repos/gms-ws-demo/nibrs-pr-test | opened | CVE-2020-5421 (Medium) detected in multiple libraries | security vulnerability | ## CVE-2020-5421 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-web-4.3.11.RELEASE.jar</b>, <b>spring-web-5.0.9.RELEASE.jar</b>, <b>spring-web-5.1.7.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-web-4.3.11.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: nibrs-pr-test/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-web/4.3.11.RELEASE/spring-web-4.3.11.RELEASE.jar,nibrs-pr-test/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/spring-web-4.3.11.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-web-4.3.11.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-web-5.0.9.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: nibrs-pr-test/tools/nibrs-staging-data/pom.xml</p>
<p>Path to vulnerable library: nibrs-pr-test/web/nibrs-web/target/nibrs-web/WEB-INF/lib/spring-web-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.0.9.RELEASE/spring-web-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.0.9.RELEASE/spring-web-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.0.9.RELEASE/spring-web-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.0.9.RELEASE/spring-web-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-web/5.0.9.RELEASE/spring-web-5.0.9.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-web-5.0.9.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-web-5.1.7.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: nibrs-pr-test/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-web/5.1.7.RELEASE/spring-web-5.1.7.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- :x: **spring-web-5.1.7.RELEASE.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs-pr-test/commit/860cc22f54e17594e32e303f0716fb065202fff5">860cc22f54e17594e32e303f0716fb065202fff5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.
<p>Publish Date: 2020-09-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421>CVE-2020-5421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2020-5421">https://tanzu.vmware.com/security/cve-2020-5421</a></p>
<p>Release Date: 2020-09-19</p>
<p>Fix Resolution: org.springframework:spring-web:4.3.29,5.0.19,5.1.18,5.2.9</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-web","packageVersion":"4.3.11.RELEASE","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-web:4.3.11.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-web:4.3.29,5.0.19,5.1.18,5.2.9"},{"packageType":"Java","groupId":"org.springframework","packageName":"spring-web","packageVersion":"5.0.9.RELEASE","packageFilePaths":["/tools/nibrs-staging-data/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml","/web/nibrs-web/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-web:5.0.9.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-web:4.3.29,5.0.19,5.1.18,5.2.9"},{"packageType":"Java","groupId":"org.springframework","packageName":"spring-web","packageVersion":"5.1.7.RELEASE","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework:spring-web:5.1.7.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-web:4.3.29,5.0.19,5.1.18,5.2.9"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-5421","vulnerabilityDetails":"In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"High","PR":"Low","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True |
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2020-5421">https://tanzu.vmware.com/security/cve-2020-5421</a></p>
<p>Release Date: 2020-09-19</p>
<p>Fix Resolution: org.springframework:spring-web:4.3.29,5.0.19,5.1.18,5.2.9</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-web","packageVersion":"4.3.11.RELEASE","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-web:4.3.11.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-web:4.3.29,5.0.19,5.1.18,5.2.9"},{"packageType":"Java","groupId":"org.springframework","packageName":"spring-web","packageVersion":"5.0.9.RELEASE","packageFilePaths":["/tools/nibrs-staging-data/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml","/web/nibrs-web/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-web:5.0.9.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-web:4.3.29,5.0.19,5.1.18,5.2.9"},{"packageType":"Java","groupId":"org.springframework","packageName":"spring-web","packageVersion":"5.1.7.RELEASE","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework:spring-web:5.1.7.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-web:4.3.29,5.0.19,5.1.18,5.2.9"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-5421","vulnerabilityDetails":"In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"High","PR":"Low","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries spring web release jar spring web release jar spring web release jar spring web release jar spring web library home page a href path to dependency file nibrs pr test tools nibrs fbi service pom xml path to vulnerable library home wss scanner repository org springframework spring web release spring web release jar nibrs pr test tools nibrs fbi service target nibrs fbi service web inf lib spring web release jar dependency hierarchy x spring web release jar vulnerable library spring web release jar spring web library home page a href path to dependency file nibrs pr test tools nibrs staging data pom xml path to vulnerable library nibrs pr test web nibrs web target nibrs web web inf lib spring web release jar home wss scanner repository org springframework spring web release spring web release jar home wss scanner repository org springframework spring web release spring web release jar home wss scanner repository org springframework spring web release spring web release jar home wss scanner repository org springframework spring web release spring web release jar home wss scanner repository org springframework spring web release spring web release jar dependency hierarchy x spring web release jar vulnerable library spring web release jar spring web library home page a href path to dependency file nibrs pr test tools 
nibrs summary report common pom xml path to vulnerable library home wss scanner repository org springframework spring web release spring web release jar dependency hierarchy spring boot starter web release jar root library x spring web release jar vulnerable library found in head commit a href found in base branch master vulnerability details in spring framework versions and older unsupported versions the protections against rfd attacks from cve may be bypassed depending on the browser used through the use of a jsessionid path parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring web isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org springframework spring web release isminimumfixversionavailable true minimumfixversion org springframework spring web packagetype java groupid org springframework packagename spring web packageversion release packagefilepaths istransitivedependency false dependencytree org springframework spring web release isminimumfixversionavailable true minimumfixversion org springframework spring web packagetype java groupid org springframework packagename spring web packageversion release packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework spring web release isminimumfixversionavailable true minimumfixversion org springframework spring web basebranches vulnerabilityidentifier cve vulnerabilitydetails in spring framework versions and older unsupported versions the protections against rfd attacks from cve may be bypassed depending on the browser used through the use of a jsessionid path parameter vulnerabilityurl | 0 |
4,921 | 25,285,054,866 | IssuesEvent | 2022-11-16 18:34:43 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | sam-beta-cdk support for typescript lambda functions - aws-lambda-nodejs | type/feature maintainer/need-followup | **The feature:**
Would it be possible to add support for sam-beta-cdk to bundle TypeScript lambda functions using @aws-cdk/aws-lambda-nodejs (NodejsFunction)?
**What I've tried:**
I have tried to use the @aws-cdk/aws-lambda-nodejs library to compile TypeScript functions as opposed to the @aws-cdk/aws-lambda library. However, what I notice is that when I run:
`sam-beta-cdk build`
the assets for a @aws-cdk/aws-lambda-nodejs (NodejsFunction) function only produce an index.js asset in the .aws-sam/cdk-out/asset.xxxxxxxxxxxxx/ directory, but package.json also needs to be in there for the function to be built properly. If I manually add the package.json from the function in question, then I can run
`cdk deploy -a .aws-sam/build`
and the stack will deploy correctly.
I'm not entirely sure if the solution would be as simple as adding package.json to the assets directory for a particular stack, but it seemed to work. Perhaps this is a feature I should request in the main aws-cdk repository. Please let me know if there's something I'm missing in my approach to creating TypeScript lambda functions.
| True | non_priority | 0 |
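For context, a minimal sketch of the setup described in this issue — a CDK v1 stack that bundles a TypeScript handler with `NodejsFunction`. The stack class, construct id, and entry path are hypothetical placeholders; only the construct and its `entry`/`handler`/`runtime` props come from `@aws-cdk/aws-lambda-nodejs` itself:

```ts
import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import { NodejsFunction } from '@aws-cdk/aws-lambda-nodejs';

export class ExampleStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // NodejsFunction bundles the TypeScript entry point at synth time,
    // which is why the staged asset directory contains a bundled index.js
    // rather than the raw source tree.
    new NodejsFunction(this, 'TsHandler', {
      entry: 'src/handler.ts', // hypothetical path
      handler: 'handler',
      runtime: lambda.Runtime.NODEJS_14_X,
    });
  }
}
```

Bundling typically happens via esbuild, producing a single index.js — consistent with what the reporter sees in `.aws-sam/cdk-out/`, minus the package.json the build step expects.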
161,346 | 25,324,078,616 | IssuesEvent | 2022-11-18 07:40:39 | dgattey/dg | https://api.github.com/repos/dgattey/dg | closed | More info about me (see other inspiration) | content design components | 1. Languages (Spanish, but also Swift)
2. Who I am/what I like in more detail
3. What I do outside of work
4. Skills
5. Values
6. "Now" section (what I'm currently into/reading/interested in/watching/etc)
| 1.0 | non_priority | 0 |
154,970 | 24,381,102,341 | IssuesEvent | 2022-10-04 07:54:51 | LuccaSA/lucca-front | https://api.github.com/repos/LuccaSA/lucca-front | closed | [Proposal] Managing scoped styles in LF | 👥 Guilde Design | The Timmi Temps/Absences/Office mobile app is a sort of "portal" with 3 "sub-apps" which, at first, will all have their own color theme
Unfortunately, LF does not currently handle this use case; indeed, some things are not scoped:
- the color variables etc. live on :root
- some base elements, such as <a> links, have generic styling, including some colors
- in the components, variables are also added on :root via @at-root
Admittedly we can redefine the palettes, but we cannot really redefine them for a child element of root, because this happens:
```scss
:root {
  --red: red;
  --autreVariable: var(--red);
}
.timmi-timesheet {
  --red: blue;
  * {
    color: var(--autreVariable) !important; // yields red
  }
}
```
It behaves as a kind of closure, in a way.
We hit this case, for example, with the CSS vars of the loading spinner, which reference palettes-primary-200 and palettes-primary-700:
```scss
// commons/config.scss
$loading: (
  'background': var(--palettes-primary-200),
  'frontground': var(--palettes-primary-700),
  'speed': 600ms,
);
```
This map is then used inside a :root selector:
```scss
// commons/base.scss
:root {
  [...]
  @include core.cssvars('commons-loading', config.$loading);
  [...]
```
We can redefine the palettes as much as we like in a descendant element of :root; --commons-loading-background will still reference the root-level scss variable, which is not overridden.
One solution would therefore be to also redefine --commons-loading-background on the :root descendant where the palettes were redefined, introducing a notion of "scope" for variable declarations.
| 1.0 | non_priority | 0 |
355,049 | 25,175,459,030 | IssuesEvent | 2022-11-11 08:51:13 | asaierika/pe | https://api.github.com/repos/asaierika/pe | opened | Documentation of editing the remark of a person not clear | severity.Low type.DocumentationBug | In the User Guide, it was mentioned in 2.6 Adding a Remark that the remark command can add or edit the person's remark. The format of the command for adding and editing is the same; the only difference is that adding a remark to a person with an existing remark is considered "editing" the remark, but this is not stated clearly in the User Guide and may lead the user to misunderstand the format. Specifically, the examples provided did not illustrate clearly how adding and editing remarks work. In the User Guide, it is written:

However, according to the behaviour of TAB, it is clear that "remark 1 r/Interested to be a TA" will edit the remark of the person only if the person already has an existing remark; otherwise, it will add the remark to the person. Additionally, the second example "remark 2 r/remark_one r/remark_two" is not phrased clearly, as the user may misread it as meaning that remark_one is the original remark and remark_two is the edited remark. Similarly, this example also does not specify that the command is run on a person with an existing remark, thus "editing", not "adding". The second example was intended to illustrate that when multiple parameters are given, only the last one is considered, which applies not only to editing a remark but also to adding one, as demonstrated below:


The above is editing the remark.


The above is adding a remark.
Thus, the second example is meant to illustrate that, since adding and editing remarks share the same format, only the last parameter is taken in both cases; but due to the phrasing, the user may misunderstand it as applying only to editing a person's remark.
<!--session: 1668153136016-210482ea-097f-4dd0-a2c3-00e4044cd38c-->
<!--Version: Web v3.4.4--> | 1.0 | non_priority | 0 |
283,533 | 21,317,433,755 | IssuesEvent | 2022-04-16 14:28:16 | UToledo-SeniorDesign/Diabetes-Management-Mobile-App | https://api.github.com/repos/UToledo-SeniorDesign/Diabetes-Management-Mobile-App | closed | Research about insulin scale and calculating insulin | documentation | ## Overview
Create a google doc with information regarding how the insulin sliding-scale table works and how we use it to calculate the total number of insulin units to take (correlated to the total carb intake).
**This doc doesn't have to be a final draft, but more like we need to start gathering data and information about the subject at hand**
## Details
- Find reliable source
- If we have several options of insulin sliding scale with different values then we want to annotate those findings AND keep the one from the most reliable source
> Note: Please update this description with the google doc reference to be able to come back and check findings | 1.0 | non_priority | 0 |
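Purely as an illustration of the mechanism this issue is researching, here is a sketch of a sliding-scale lookup combined with carb coverage. Every threshold, ratio, and unit count below is an invented placeholder, not medical guidance — the real values are exactly what the issue asks the team to source from a reliable reference:

```ts
// Placeholder sliding-scale table: each row maps a blood-glucose range
// (mg/dL) to a correction dose in insulin units. VALUES ARE MADE UP.
interface ScaleRow {
  minGlucose: number; // inclusive
  maxGlucose: number; // exclusive
  correctionUnits: number;
}

const slidingScale: ScaleRow[] = [
  { minGlucose: 0, maxGlucose: 150, correctionUnits: 0 },
  { minGlucose: 150, maxGlucose: 200, correctionUnits: 1 },
  { minGlucose: 200, maxGlucose: 250, correctionUnits: 2 },
];

// Total dose = carb coverage (grams of carbs / insulin-to-carb ratio)
// + sliding-scale correction for the current glucose reading.
function totalUnits(carbsGrams: number, carbRatio: number, glucose: number): number {
  const carbUnits = carbsGrams / carbRatio;
  const row = slidingScale.find((r) => glucose >= r.minGlucose && glucose < r.maxGlucose);
  return carbUnits + (row?.correctionUnits ?? 0);
}

// Example with placeholder inputs: 60 g of carbs, a 1:15 ratio, glucose 180.
console.log(totalUnits(60, 15, 180)); // 4 + 1 = 5 units (illustrative only)
```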
277,924 | 30,695,946,194 | IssuesEvent | 2023-07-26 18:36:31 | RG4421/ampere-centos-kernel | https://api.github.com/repos/RG4421/ampere-centos-kernel | opened | CVE-2023-33952 (Medium) detected in multiple libraries | Mend: dependency security vulnerability | ## CVE-2023-33952 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxv5.2</b>, <b>linuxv5.2</b>, <b>linuxv5.2</b>, <b>linuxv5.2</b>, <b>linuxv5.2</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A double-free vulnerability was found in the vmwgfx driver in the Linux kernel. The flaw exists within the handling of vmw_buffer_object objects. The issue results from the lack of validating the existence of an object prior to performing further free operations on the object. This flaw allows a local privileged user to escalate privileges and execute code in the context of the kernel.
<p>Publish Date: 2023-07-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-33952>CVE-2023-33952</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-33952">https://www.linuxkernelcves.com/cves/CVE-2023-33952</a></p>
<p>Release Date: 2023-07-24</p>
<p>Fix Resolution: v6.1.13,v6.2,v6.3-rc1,v6.4-rc1</p>
</p>
</details>
<p></p>
| True | non_priority | 0 |
24,005 | 23,203,752,737 | IssuesEvent | 2022-08-02 01:41:34 | dgraph-io/dgraph | https://api.github.com/repos/dgraph-io/dgraph | reopened | Add a validator for bulk and live loader | kind/feature status/accepted area/usability area/bulk-loader area/live-loader | ## Experience Report
### What you wanted to do
I wanted to load a big dataset using the bulk loader.
Bulk loading and live loading can take quite a while for large datasets, so it's very annoying when the process crashes after hours of running because of a single badly formatted entry.
### What you actually did
I had to fix the issue and rerun the whole process from scratch.
### Why that wasn't great, with examples
It's a waste of time and resources.
Instead, I would have expected the input to be validated before the process started so any errors would be detected early on.
### Any external references to support your case
| True | non_priority | 0 |
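A rough sketch of what such a pre-flight validator could look like — a pass that reports the first malformed line of an N-Quads file before it ever reaches the loader, so a bad entry fails in seconds rather than hours. Dgraph's loaders accept RDF N-Quads; the regex below is a deliberately loose approximation of that grammar, not a complete parser:

```ts
import * as fs from 'fs';
import * as readline from 'readline';

// Very loose N-Quad shape: <subject> or blank node, <predicate>, object, terminating dot.
const NQUAD = /^\s*(<[^>]+>|_:\S+)\s+<[^>]+>\s+\S.*\.\s*$/;

async function validate(file: string): Promise<void> {
  const rl = readline.createInterface({ input: fs.createReadStream(file) });
  let lineNo = 0;
  for await (const line of rl) {
    lineNo++;
    if (line.trim() === '' || line.trimStart().startsWith('#')) continue; // skip blanks/comments
    if (!NQUAD.test(line)) {
      throw new Error(`${file}:${lineNo}: malformed N-Quad: ${line}`);
    }
  }
}

// Usage: ts-node validate.ts data.rdf
const file = process.argv[2];
if (!file) {
  console.error('usage: validate <file.rdf>');
  process.exit(1);
}
validate(file).catch((err) => {
  console.error(err.message);
  process.exit(1);
});
```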
153,832 | 13,528,579,608 | IssuesEvent | 2020-09-15 16:55:45 | corrados/jamulus | https://api.github.com/repos/corrados/jamulus | closed | Release message consistency? | documentation | I'm putting together an aggregation of release info feeds. The Jamulus messages are slightly different from most others, and I was wondering if a slight modification of format going forward might be possible? https://libreav.org/news | 1.0 | non_priority | 0 |
17,398 | 4,160,909,458 | IssuesEvent | 2016-06-17 14:52:21 | USGS-R/wateRuse | https://api.github.com/repos/USGS-R/wateRuse | closed | Vignette Structure and Basic Setup and Descriptions | documentation enhancement | Get the basics of a vignette setup showing how to use the individual functions / plots. @rwdudley-usgs will take a first pass, @grrmartin-USGS will take a second pass, @cadieter-usgs will put in explanatory text and @mamaupin-usgs will do final review. | 1.0 | non_priority | 0 |
175,283 | 13,543,173,614 | IssuesEvent | 2020-09-16 18:31:56 | mozilla/addons-frontend | https://api.github.com/repos/mozilla/addons-frontend | opened | CircleCI Integration tests failing - test_category_section_loads_correct_category | component: testing component: uitests needs: info | Log Output
```
_____ test_category_section_loads_correct_category[Resolution: 1080x1920] ______
[gw0] linux -- Python 3.6.9 /usr/bin/python3
base_url = 'http://olympia.test'
selenium = <selenium.webdriver.firefox.webdriver.WebDriver (session="c98ac172-3349-4339-8871-484bc1c68041")>
@pytest.mark.desktop_only
@pytest.mark.nondestructive
def test_category_section_loads_correct_category(base_url, selenium):
page = Extensions(selenium, base_url).open()
item = page.categories.category_list[0]
name = item.name
category = item.click()
> assert name in category.header.name
E AssertionError: assert 'Alerts & Updates' in 'Searching for add-ons'
E + where 'Searching for add-ons' = <pages.desktop.category.Category.Header object at 0x7f7eed7136a0>.name
E + where <pages.desktop.category.Category.Header object at 0x7f7eed7136a0> = <pages.desktop.category.Category object at 0x7f7eed713ac8>.header
tests/frontend/ui/test_home.py:42: AssertionError
```
Are there any ideas on what might be causing this? Maybe a category has been renamed or something? | 2.0 | non_priority | 0 |
241,047 | 18,419,080,640 | IssuesEvent | 2021-10-13 14:23:13 | project-alvarium/alvarium-sdk-java | https://api.github.com/repos/project-alvarium/alvarium-sdk-java | opened | Add copyright notice to all files | documentation | the dell copyright notice mentioned [here](https://github.com/project-alvarium/alvarium-sdk-go/blob/16bd9f306d79529ca2144bf3dcf57c1bc66a9f8c/pkg/sdk.go#L1) should be added on top of all files present in the project. | 1.0 | non_priority | 0 |
11,343 | 9,313,195,927 | IssuesEvent | 2019-03-26 04:48:46 | gctools-outilsgc/gcconnex | https://api.github.com/repos/gctools-outilsgc/gcconnex | closed | Fatal Error on Career Market Place --- GCcollab | Service: gccollab [zube]: In Progress bug | ## Fatal error on career marketplace on GCcollab while trying to find members
" I am consistently getting a 'fatal error' (see attachment) when trying to search 'find members' within the Career Marketplace. I have logged out/in and rebooted my computer multiple times. I have no outstanding updates for my workstation. "
User email : alec.judt@forces.gc.ca
Username : alecjudt
**( On Internet Explorer )**

Ticket : https://gccollab.gctools-outilsgc.ca/a/tickets/7350
I have also tested it out and this is what I receive **( On Chrome)**

## For the development team
- [ ] Issue user story documented
- [ ] UX input received
- [ ] Design completed
- [ ] Design validated by business team / UX
- [ ] Code review completed by peer
- [ ] Issue closing comment references any duplicate or connected issues or pull requests
- [ ] Issue closed
| 1.0 | non_priority | 0 |
181,815 | 30,745,583,985 | IssuesEvent | 2023-07-28 14:48:38 | nextcloud/text | https://api.github.com/repos/nextcloud/text | closed | [UI] list items are too separated in a list | bug design | **Describe the bug**
The items are visually too separated in a list.
**To Reproduce**
Steps to reproduce the behavior:
1. write a list with some items
**Expected behavior**
The list items should be compact enough to help the user see that it is the same list.
In the HTML, the content of the `<li>` is inside a `<p>` and a `<div>`, each of them with a pretty big `margin-bottom`.
If these margins are disabled, the spacing between list items is perfect.
**Screenshots**
actual style

What I think is better

| 1.0 | non_priority | 0 |
8,441 | 4,240,157,896 | IssuesEvent | 2016-07-06 12:27:40 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Doctool tests require absent eslint | build doc test tools | * **Version**: 6.2.1
* **Platform**: Linux veritas 3.19.0-59-generic #66~14.04.1-Ubuntu SMP Fri May 13 17:27:10 UTC 2016 x86_64 GNU/Linux
* **Subsystem**:
It seems that several doctool tests depend on the availability of js-yaml in the source tree. js-yaml is supposedly included from the path `tools/eslint/node_modules/js-yaml`, which is not available in the source tarball. Running `make lint` in the source tree leads to the following error message:
```
Linting is not available through the source tarball.
Use the git repo instead: git clone https://github.com/nodejs/node.git
```
The actual offending test outputs, runnable via `/usr/bin/python tools/test.py --mode=release -J doctool`:
```
=== release test-doctool-html ===
Path: doctool/test-doctool-html
module.js:442
throw err;
^
Error: Cannot find module '/home/jelle/temp/node-v6.2.1/tools/eslint/node_modules/js-yaml'
at Function.Module._resolveFilename (module.js:440:15)
at Function.Module._load (module.js:388:25)
at Module.require (module.js:468:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (/home/jelle/temp/node-v6.2.1/tools/doc/node_modules/js-yaml/index.js:15:18)
at Module._compile (module.js:541:32)
at Object.Module._extensions..js (module.js:550:10)
at Module.load (module.js:458:32)
at tryModuleLoad (module.js:417:12)
at Function.Module._load (module.js:409:3)
Command: out/Release/node /home/jelle/temp/node-v6.2.1/test/doctool/test-doctool-html.js
=== release test-doctool-json ===
Path: doctool/test-doctool-json
module.js:442
throw err;
^
Error: Cannot find module '/home/jelle/temp/node-v6.2.1/tools/eslint/node_modules/js-yaml'
at Function.Module._resolveFilename (module.js:440:15)
at Function.Module._load (module.js:388:25)
at Module.require (module.js:468:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (/home/jelle/temp/node-v6.2.1/tools/doc/node_modules/js-yaml/index.js:15:18)
at Module._compile (module.js:541:32)
at Object.Module._extensions..js (module.js:550:10)
at Module.load (module.js:458:32)
at tryModuleLoad (module.js:417:12)
at Function.Module._load (module.js:409:3)
```
I would expect that these tests should rather be skipped when running tests from an extracted tarball, or rewritten so they do not depend on eslint. Otherwise, eslint could possibly be included instead.
/cc @jbergstroem @silverwind @TheAlphaNerd #6031
| 1.0 | non_priority | 0 |
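A sketch of the skip the reporter asks for — resolve js-yaml from the eslint tree and mark the test as skipped when it is absent, instead of crashing. The TAP-style skip line is an assumption about how the harness reports skips; it is not the actual node core test API:

```ts
import * as path from 'path';

// Path the doctool tests load js-yaml from (per the issue):
// tools/eslint/node_modules/js-yaml — present in a git checkout,
// absent from the source tarball.
const yamlPath = path.join(__dirname, '..', '..', 'tools', 'eslint', 'node_modules', 'js-yaml');

let yaml: unknown;
try {
  // eslint-disable-next-line @typescript-eslint/no-var-requires
  yaml = require(yamlPath);
} catch {
  // Skip instead of throwing when the tarball ships without tools/eslint.
  console.log('1..0 # Skipped: js-yaml not available in this source tree');
  process.exit(0);
}
// `yaml` would then be used by the test body as before.
```

The same guard could live in a shared helper so both doctool tests skip consistently.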
419,333 | 28,142,805,625 | IssuesEvent | 2023-04-02 05:45:59 | Linuxfabrik/monitoring-plugins | https://api.github.com/repos/Linuxfabrik/monitoring-plugins | closed | README.rst: explain "How to install checks into Icinga"? (all or just one) | documentation | In GitLab by @markuslf on Feb 21, 2020, 11:06
- Check
- Basket
- PNG
- Grafana | 1.0 | non_priority | 0 |
34,217 | 16,481,902,925 | IssuesEvent | 2021-05-24 12:53:16 | apache/superset | https://api.github.com/repos/apache/superset | closed | [native filter] Instant filter component will generate 2x more fetch | #bug P1 bug:performance viz:dashboard:native-filter | #### How to reproduce the bug
1. Go to dashboard
2. Add a native filter component and add a value
3. Check `Apply changes instantly`
4. Reload the dashboard. You will see every chart is loaded 3 times. This issue causes the dashboard to load really slowly.
#### Screenshots

### Environment
(please complete the following information):
latest master
### Checklist
Make sure to follow these steps before submitting your issue - thank you!
- [x] I have checked the superset logs for python stacktraces and included it here as text if there are any.
- [x] I have reproduced the issue with at least the latest released version of superset.
- [x] I have checked the issue tracker for the same issue and I haven't found one similar.
### Additional context
cc @junlincc @villebro This issue is reported by an open source user. | True | non_priority | 0 |
201,534 | 23,018,585,103 | IssuesEvent | 2022-07-22 01:04:48 | william31212/NISRA_BlogEngine_WhiteSourceBolt | https://api.github.com/repos/william31212/NISRA_BlogEngine_WhiteSourceBolt | opened | WS-2021-0133 (Medium) detected in tinymce-4.2.4.min.js | security vulnerability | ## WS-2021-0133 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tinymce-4.2.4.min.js</b></p></summary>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.2.4/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.2.4/tinymce.min.js</a></p>
<p>Path to vulnerable library: /admin/editors/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.2.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/william31212/NISRA_BlogEngine_WhiteSourceBolt/commits/df351b2afd89988ab17bca6c76add5ddebcf055b">df351b2afd89988ab17bca6c76add5ddebcf055b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-site scripting vulnerability was found in TinyMCE before 5.7.1. A cross-site scripting (XSS) vulnerability was discovered in the URL sanitization logic of the core parser for form elements. The vulnerability allowed arbitrary JavaScript execution when inserting a specially crafted piece of content into the editor using the clipboard or APIs, and then submitting the form. However, as TinyMCE does not allow forms to be submitted while editing, the vulnerability could only be triggered when the content was previewed or rendered outside of the editor.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://github.com/tinymce/tinymce/commit/09bfb1dcb176611d22a477666d8cea72cd14c3fe>WS-2021-0133</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5vm8-hhgr-jcjp">https://github.com/advisories/GHSA-5vm8-hhgr-jcjp</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: tinymce - 5.7.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2021-0133 (Medium) detected in tinymce-4.2.4.min.js - ## WS-2021-0133 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tinymce-4.2.4.min.js</b></p></summary>
<p>TinyMCE rich text editor</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.2.4/tinymce.min.js">https://cdnjs.cloudflare.com/ajax/libs/tinymce/4.2.4/tinymce.min.js</a></p>
<p>Path to vulnerable library: /admin/editors/tinymce/tinymce.min.js</p>
<p>
Dependency Hierarchy:
- :x: **tinymce-4.2.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/william31212/NISRA_BlogEngine_WhiteSourceBolt/commits/df351b2afd89988ab17bca6c76add5ddebcf055b">df351b2afd89988ab17bca6c76add5ddebcf055b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-site scripting vulnerability was found in TinyMCE before 5.7.1. A cross-site scripting (XSS) vulnerability was discovered in the URL sanitization logic of the core parser for form elements. The vulnerability allowed arbitrary JavaScript execution when inserting a specially crafted piece of content into the editor using the clipboard or APIs, and then submitting the form. However, as TinyMCE does not allow forms to be submitted while editing, the vulnerability could only be triggered when the content was previewed or rendered outside of the editor.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://github.com/tinymce/tinymce/commit/09bfb1dcb176611d22a477666d8cea72cd14c3fe>WS-2021-0133</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5vm8-hhgr-jcjp">https://github.com/advisories/GHSA-5vm8-hhgr-jcjp</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: tinymce - 5.7.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | ws medium detected in tinymce min js ws medium severity vulnerability vulnerable library tinymce min js tinymce rich text editor library home page a href path to vulnerable library admin editors tinymce tinymce min js dependency hierarchy x tinymce min js vulnerable library found in head commit a href found in base branch main vulnerability details cross site scripting vulnerability was found in tinymce before a cross site scripting xss vulnerability was discovered in the url sanitization logic of the core parser for form elements the vulnerability allowed arbitrary javascript execution when inserting a specially crafted piece of content into the editor using the clipboard or apis and then submitting the form however as tinymce does not allow forms to be submitted while editing the vulnerability could only be triggered when the content was previewed or rendered outside of the editor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tinymce step up your open source security game with mend | 0 |
159,846 | 25,065,588,829 | IssuesEvent | 2022-11-07 08:01:31 | radicle-dev/radicle-interface | https://api.github.com/repos/radicle-dev/radicle-interface | closed | Bad 404 | design | Currently, no matter what page you land on, if it doesn't exist you get this:
<img width="293" alt="image" src="https://user-images.githubusercontent.com/40774/188609196-08aa8172-6832-4f59-a052-d89a2a59aabf.png">
E.g. `/foo/bar/baz`
We should maybe say:
```
404
There's nothing here.
``` | 1.0 | Bad 404 - Currently, no matter what page you land on, if it doesn't exist you get this:
<img width="293" alt="image" src="https://user-images.githubusercontent.com/40774/188609196-08aa8172-6832-4f59-a052-d89a2a59aabf.png">
E.g. `/foo/bar/baz`
We should maybe say:
```
404
There's nothing here.
``` | non_priority | bad currently no matter what page you land on if it doesn t exist you get this img width alt image src eg foo bar baz we should maybe say there s nothing here | 0 |
23,446 | 3,823,347,287 | IssuesEvent | 2016-03-30 07:40:19 | AsyncHttpClient/async-http-client | https://api.github.com/repos/AsyncHttpClient/async-http-client | closed | Multipart/Form-Data: uploaded FilePart is corrupt/incomplete | AHC2 Defect | I'm creating a kind of Proxy using the PlayFramework 2.5.1 and the bundled AsyncHttpClient 2.0-RC16.
When uploading files over a certain size (not sure where the limit is), the transmitted FileParts are incomplete and don't include a closing boundary.
The requests look like this
browser -> playApp -> backend_server
[asynchttpclient_bug.zip](https://github.com/AsyncHttpClient/async-http-client/files/194538/asynchttpclient_bug.zip)
The files
smallfile_browser_playapp.raw + smallfile_play_backendserver.raw + test.jpg
show how the communication should be (the FileParts end correctly and include the closing boundary)
The files
conversation_browser_playapp.raw + conversation_play_backendserver.raw + 10003452_245377232464587_7349773111777690908_n.jpg
show how the communication is broken. The FilePart in conversation_play_backendserver.raw suddenly stops and does not contain a closing boundary.
It looks like this is a problem with some buffer not having enough space, or something like that.
Any help appreciated!
[asynchttpclient_bug.zip](https://github.com/AsyncHttpClient/async-http-client/files/194537/asynchttpclient_bug.zip)
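A quick way to confirm the truncation in captures like these: per RFC 2046, a complete multipart body must end with the closing delimiter `--<boundary>--`. The Python sketch below is illustrative only — the boundary value is a placeholder (it comes from the request's Content-Type header), and it assumes the HTTP headers have already been stripped from the `.raw` capture:
```python
# Illustrative check for raw multipart captures (assumes the HTTP headers
# were already stripped from the capture, and ignores any epilogue).
def has_closing_boundary(raw_body: bytes, boundary: str) -> bool:
    # RFC 2046: a complete multipart body ends with "--<boundary>--"
    closing = b"--" + boundary.encode("ascii") + b"--"
    return raw_body.rstrip(b"\r\n").endswith(closing)

with open("conversation_play_backendserver.raw", "rb") as f:
    body = f.read()
print(has_closing_boundary(body, "boundary-from-content-type-header"))  # placeholder boundary
```
A truncated FilePart like the one described above fails this check, while the small-file capture passes it.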
| 1.0 | Multipart/Form-Data: uploaded FilePart is corrupt/incomplete - I'm creating a kind of Proxy using the PlayFramework 2.5.1 and the bundled AsyncHttpClient 2.0-RC16.
When uploading files over a certain size (not sure where the limit is), the transmitted FileParts are incomplete and don't include a closing boundary.
The requests look like this
browser -> playApp -> backend_server
[asynchttpclient_bug.zip](https://github.com/AsyncHttpClient/async-http-client/files/194538/asynchttpclient_bug.zip)
The files
smallfile_browser_playapp.raw + smallfile_play_backendserver.raw + test.jpg
show how the communication should be (the FileParts end correctly and include the closing boundary)
The files
conversation_browser_playapp.raw + conversation_play_backendserver.raw + 10003452_245377232464587_7349773111777690908_n.jpg
show how the communication is broken. The FilePart in conversation_play_backendserver.raw suddenly stops and does not contain a closing boundary.
It looks like this is a problem with some buffer not having enough space, or something like that.
Any help appreciated!
[asynchttpclient_bug.zip](https://github.com/AsyncHttpClient/async-http-client/files/194537/asynchttpclient_bug.zip)
| non_priority | multipart form data uploaded filepart is corrupt incomplete i m creating a kind of proxy using the playframework and the bundled asynchttpclient when uploading files over a certain size not sure where s the limit the transmitted fileparts are incomplete and don t include a closing boundary the requests look like this browser playapp backend server the files smallfile browser playapp raw smallfile play backendserver raw test jpg show how the communication should be the fileparts end correct and include the closing boundary the files conversation browser playapp raw conversation play backendserver raw n jpg show how the communication is broken the filepart in conversation play backendserver raw suddendly stops and does not contain a closing boundary it looks like this is a problem with some buffer having not enough space or something like that any help appreciated | 0 |
53,794 | 6,344,448,947 | IssuesEvent | 2017-07-27 19:56:50 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Convert if to switch loses certain trivia | Area-IDE Bug Depth Testing | Convert ```if (/* t0 */args.Length /* t1*/ == /* t2 */ 2) return /* t3 */ 0 /* t4 */; /* t5 */ else /* t6 */ return /* t7 */ 3 /* t8 */;```
The comments "t0", "t2", and "t6" disappear.
```C#
switch (args.Length /* t1*/ )
{
case 2:
return /* t3 */ 0 /* t4 */; /* t5 */
default:
return /* t7 */ 3 /* t8 */;
}
```
| 1.0 | Convert if to switch loses certain trivia - Convert ```if (/* t0 */args.Length /* t1*/ == /* t2 */ 2) return /* t3 */ 0 /* t4 */; /* t5 */ else /* t6 */ return /* t7 */ 3 /* t8 */;```
The comments "t0", "t2", and "t6" disappear.
```C#
switch (args.Length /* t1*/ )
{
case 2:
return /* t3 */ 0 /* t4 */; /* t5 */
default:
return /* t7 */ 3 /* t8 */;
}
```
| non_priority | convert if to switch loses certain trivia convert if args length return else return the comments and disappear c switch args length case return default return | 0 |
89,111 | 15,823,732,809 | IssuesEvent | 2021-04-06 01:30:54 | thomasklwong/profile | https://api.github.com/repos/thomasklwong/profile | opened | CVE-2020-7656 (Medium) detected in jquery-1.7.1.min.js | security vulnerability | ## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: profile/node_modules/sockjs/examples/multiplex/index.html</p>
<p>Path to vulnerable library: profile/node_modules/sockjs/examples/multiplex/index.html,profile/node_modules/sockjs/examples/hapi/html/index.html,profile/node_modules/sockjs/examples/express/index.html,profile/node_modules/vm-browserify/example/run/index.html,profile/node_modules/sockjs/examples/echo/index.html,profile/node_modules/sockjs/examples/express-3.x/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e.: "</script >", which results in the enclosed script logic being executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
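The `</script >` detail is easy to see in isolation: a filter that only looks for the literal closing tag misses variants with whitespace before the `>`, which browsers still treat as a closing tag. An illustrative Python snippet (not jQuery's actual code):
```python
import re

payload = '<script>alert(1)</script >'  # note the space before '>'

# A naive filter that looks for the literal closing tag misses the payload...
print('</script>' in payload)  # False
# ...while a whitespace-tolerant pattern still catches it.
print(bool(re.search(r'</script\s*>', payload, re.IGNORECASE)))  # True
```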
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568">https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: jquery-rails - 2.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7656 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: profile/node_modules/sockjs/examples/multiplex/index.html</p>
<p>Path to vulnerable library: profile/node_modules/sockjs/examples/multiplex/index.html,profile/node_modules/sockjs/examples/hapi/html/index.html,profile/node_modules/sockjs/examples/express/index.html,profile/node_modules/vm-browserify/example/run/index.html,profile/node_modules/sockjs/examples/echo/index.html,profile/node_modules/sockjs/examples/express-3.x/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e.: "</script >", which results in the enclosed script logic being executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568">https://github.com/rails/jquery-rails/commit/8f601cbfa08749ee5bbd2bffb6e509db9d753568</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: jquery-rails - 2.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file profile node modules sockjs examples multiplex index html path to vulnerable library profile node modules sockjs examples multiplex index html profile node modules sockjs examples hapi html index html profile node modules sockjs examples express index html profile node modules vm browserify example run index html profile node modules sockjs examples echo index html profile node modules sockjs examples express x index html dependency hierarchy x jquery min js vulnerable library vulnerability details jquery prior to allows cross site scripting attacks via the load method the load method fails to recognize and remove html tags that contain a whitespace character i e which results in the enclosed script logic to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery rails step up your open source security game with whitesource | 0 |
850 | 2,915,370,792 | IssuesEvent | 2015-06-23 12:09:04 | UnixJunkie/daft | https://api.github.com/repos/UnixJunkie/daft | closed | protect against message replays | feature security | At send time:
unique_id = host ^ ":" ^ port ^ ":" ^ message_num
put it in a StringSet
at reception time:
look for unique_id in the StringSet
create a None and warn about it in case the message
was already seen
else give the message to the rest of the pipeline
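daft itself is OCaml (the `^` above is OCaml string concatenation), but the receiver-side view of the scheme is simple to sketch; the names below are invented for illustration and are not daft's API:
```python
# Illustrative Python sketch of the dedup scheme above (names are made up;
# daft itself is OCaml and its real implementation may differ).
class ReplayGuard:
    def __init__(self):
        self.seen = set()  # plays the role of the StringSet

    def accept(self, host: str, port: int, message_num: int) -> bool:
        """True if the message is fresh; False (with a warning) on a replay."""
        unique_id = f"{host}:{port}:{message_num}"
        if unique_id in self.seen:
            print(f"warning: replayed message {unique_id} dropped")
            return False
        self.seen.add(unique_id)
        return True
```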
| True | protect against message replays - At send time:
unique_id = host ^ ":" ^ port ^ ":" ^ message_num
put it in a StringSet
at reception time:
look for unique_id in the StringSet
create a None and warn about it in case the message
was already seen
else give the message to the rest of the pipeline
| non_priority | protect against message replays at send time unique id host port message num put it in a stringset at reception time look for unique id into the stringset create a none and warn about it in case the message was already seen else give the message to the rest of the pipelin | 0 |
13,527 | 8,557,082,976 | IssuesEvent | 2018-11-08 14:54:12 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Renaming/Moving Files Should Also Update AutoLoad Tab | enhancement topic:editor usability | I think that when you rename or move a file, the autoloaded node paths should be updated as well. Currently when you rename or move autoloaded scenes and scripts, you have to re-add them in the AutoLoad tab. | True | Renaming/Moving Files Should Also Update AutoLoad Tab - I think that when you rename or move a file, the autoloaded node paths should be updated as well. Currently when you rename or move autoloaded scenes and scripts, you have to re-add them in the AutoLoad tab. | non_priority | renaming moving files should also update autoload tab i think that when you rename or move a file the autoloaded node paths should be updated as well currently when you rename or move autoloaded scenes and scripts you have to re add them in the autoload tab | 0 |
140,743 | 18,912,448,510 | IssuesEvent | 2021-11-16 15:20:37 | berviantoleo/berviantoleo.github.io | https://api.github.com/repos/berviantoleo/berviantoleo.github.io | opened | CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz | security vulnerability | ## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>Path to dependency file: berviantoleo.github.io/package.json</p>
<p>Path to vulnerable library: berviantoleo.github.io/node_modules/json-schema/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-6.0.1.tgz (Root Library)
- request-2.88.2.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/berviantoleo/berviantoleo.github.io/commit/5b2fce12a39288784bade60f73e3d3c5293f66d3">5b2fce12a39288784bade60f73e3d3c5293f66d3</a></p>
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3918">https://nvd.nist.gov/vuln/detail/CVE-2021-3918</a></p>
<p>Release Date: 2021-11-13</p>
<p>Fix Resolution: json-schema - 0.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz - ## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>Path to dependency file: berviantoleo.github.io/package.json</p>
<p>Path to vulnerable library: berviantoleo.github.io/node_modules/json-schema/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-6.0.1.tgz (Root Library)
- request-2.88.2.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/berviantoleo/berviantoleo.github.io/commit/5b2fce12a39288784bade60f73e3d3c5293f66d3">5b2fce12a39288784bade60f73e3d3c5293f66d3</a></p>
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3918">https://nvd.nist.gov/vuln/detail/CVE-2021-3918</a></p>
<p>Release Date: 2021-11-13</p>
<p>Fix Resolution: json-schema - 0.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in json schema tgz cve high severity vulnerability vulnerable library json schema tgz json schema validation and specifications library home page a href path to dependency file berviantoleo github io package json path to vulnerable library berviantoleo github io node modules json schema package json dependency hierarchy node sass tgz root library request tgz http signature tgz jsprim tgz x json schema tgz vulnerable library found in head commit a href found in base branch development vulnerability details json schema is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution json schema step up your open source security game with whitesource | 0 |
165,976 | 12,887,320,083 | IssuesEvent | 2020-07-13 11:01:19 | w3c/epubcheck | https://api.github.com/repos/w3c/epubcheck | closed | harmonize early BDD tests to the latest conventions | status: ready for implem type: tests | Check the early tests added in fdd32d7c8e5985948914910e404baeb37331dcec and align them with the latest naming conventions and templates used in the new test suite. | 1.0 | harmonize early BDD tests to the latest conventions - Check the early tests added in fdd32d7c8e5985948914910e404baeb37331dcec and align them with the latest naming conventions and templates used in the new test suite. | non_priority | harmonize early bdd tests to the latest conventions check the early tests added in and align them with the latest naming conventions and templates used in the new test suite | 0 |
70,327 | 30,633,416,592 | IssuesEvent | 2023-07-24 15:57:27 | dockstore/dockstore | https://api.github.com/repos/dockstore/dockstore | closed | Some endpoints expecting JSON fail with 500 errors if JSON not sent | bug web-service review qa | **Describe the bug**
Three endpoints were found that expect a JSON body but return a 500 status code if you pass a non-JSON body:
* /auth/tokens/google
* /api/ga4gh/v2/extended/tools/entry/_search
* /workflows/hostedEntry/\<id>
**To Reproduce**
Steps to reproduce the behavior:
```
curl -X 'POST' \
'https://qa.dockstore.org/api/auth/tokens/google' \
-H 'accept: application/json' \
-H 'Authorization: Bearer <redacted>' \
-H 'Content-Type: */*' \
-d 'not valid json'
{"code":500,"message":"There was an error processing your request. It has been logged (ID dc449b8713742c40)."}
```
```
curl -X 'POST' \
'https://qa.dockstore.org/api/api/ga4gh/v2/extended/tools/entry/_search' \
-H 'accept: application/json' \
-H 'Content-Type: */*' \
-d 'string'
{"code":500,"message":"There was an error processing your request. It has been logged (ID 9982324538a8cc75)."}
```
**Expected behavior**
Should be a 4xx.
Bonus: The endpoints should be declared to only consume `application/json`; I think if so declared, the call would fail in the framework before even getting to our code.
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-2391)
┆Fix Versions: Dockstore 1.14.x
┆Issue Number: DOCK-2391
┆Sprint: 115 - Niagara
┆Issue Type: Story
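Dockstore's web service is Java/Dropwizard, so the snippet below is only a framework-neutral illustration of the requested behavior (reject a non-JSON body with a 4xx instead of letting a parse error bubble up as a 500); the route and message shape are borrowed from the report, not from Dockstore's code:
```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/auth/tokens/google")  # route borrowed from the report
def google_token():
    # silent=True returns None for a missing or malformed JSON body
    # instead of raising, so we can answer with a 400 ourselves.
    payload = request.get_json(silent=True)
    if payload is None:
        return jsonify(code=400, message="Request body must be valid JSON"), 400
    return jsonify(code=200, message="ok")
```
Declaring the endpoint to consume only `application/json` (the "bonus" above) achieves the same effect one layer earlier, since the framework can then reject mismatched Content-Types before the handler runs.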
| 1.0 | Some endpoints expecting JSON fail with 500 errors if JSON not sent - **Describe the bug**
Three endpoints were found that expect a JSON body but return a 500 status code if you pass a non-JSON body:
* /auth/tokens/google
* /api/ga4gh/v2/extended/tools/entry/_search
* /workflows/hostedEntry/\<id>
**To Reproduce**
Steps to reproduce the behavior:
```
curl -X 'POST' \
'https://qa.dockstore.org/api/auth/tokens/google' \
-H 'accept: application/json' \
-H 'Authorization: Bearer <redacted>' \
-H 'Content-Type: */*' \
-d 'not valid json'
{"code":500,"message":"There was an error processing your request. It has been logged (ID dc449b8713742c40)."}
```
```
curl -X 'POST' \
'https://qa.dockstore.org/api/api/ga4gh/v2/extended/tools/entry/_search' \
-H 'accept: application/json' \
-H 'Content-Type: */*' \
-d 'string'
{"code":500,"message":"There was an error processing your request. It has been logged (ID 9982324538a8cc75)."}
```
**Expected behavior**
Should be a 4xx.
Bonus: The endpoints should be declared to only consume `application/json`; I think if so declared, the call would fail in the framework before even getting to our code.
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-2391)
┆Fix Versions: Dockstore 1.14.x
┆Issue Number: DOCK-2391
┆Sprint: 115 - Niagara
┆Issue Type: Story
| non_priority | some endpoints expecting json fail with errors if json not sent describe the bug three endpoints found that expect a json body where if you pass a non json body a status code is returned auth tokens google api extended tools entry search workflows hostedentry to reproduce steps to reproduce the behavior curl x post h accept application json h authorization bearer h content type d not valid json code message there was an error processing your request it has been logged id curl x post h accept application json h content type d string code message there was an error processing your request it has been logged id expected behavior should be a bonus the endpoints should be declared to only consume application json i think if so declared the call would fail in the framework before even getting to our code ┆issue is synchronized with this ┆fix versions dockstore x ┆issue number dock ┆sprint niagara ┆issue type story | 0 |
6,205 | 8,607,237,737 | IssuesEvent | 2018-11-17 20:14:05 | FourthState/plasma-mvp-sidechain | https://api.github.com/repos/FourthState/plasma-mvp-sidechain | closed | Change Confirmation Hash to be compatible with rootchain | compatibility | When deciding upon what the confirmation hash should consist of, we need to keep a couple of things in mind.
- Confirmation Hash needs to be secure against a cross plasma chain attack outlined [here](https://github.com/FourthState/plasma-mvp-rootchain/issues/48)
- It needs to be unique so an attacker cannot reuse an old confirmation signature on a new tx.
Solution 1 (currently implemented on the rootchain): Sign over the `Hash(Hash(txBytes) + block_hash)`. By taking the hash of txbytes, we have uniqueness since each set of txbytes contains at least one position and every position is unique. Signing over the block hash allows us to avoid a cross chain attack since the block hash is unique and can only be recreated if another chain has identical history up to that block. Deposits and fees will not have confirm signatures associated with them and therefore not cause any issues with this scheme.
Solution 2: Sign over the `Hash(Hash(contract address) + Hash(position priority)) `. The ethereum contract address allows us to avoid the cross plasma chain attack and the hash of the priority provides uniqueness. However, hashes of uints in golang can produce varying results to hashes on uints in solidity.
Solution 3: Sign over the `Hash(Hash(contract address) + Hash(rlp encoded position))`. This maintains the same properties as above except the hashes in golang and solidity will be consistent. Since solidity to the best of my knowledge does not have an onchain rlp encoder at the moment, we will pass in the position bytes into the start exit functions.
Solution 1 is our current decision on how to handle confirmation hashes.
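For concreteness, Solution 1 is just a double hash over the transaction bytes and the block hash. A sketch, with SHA-256 standing in for whatever digest the rootchain actually uses (Ethereum contracts would typically use keccak256, and the exact byte encoding of `txBytes` is defined by the chain):
```python
import hashlib

def confirmation_hash(tx_bytes: bytes, block_hash: bytes) -> bytes:
    """Solution 1: Hash(Hash(txBytes) + block_hash), where '+' is byte
    concatenation. SHA-256 is a stand-in for the chain's real digest."""
    tx_hash = hashlib.sha256(tx_bytes).digest()
    return hashlib.sha256(tx_hash + block_hash).digest()
```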
| True | Change Confirmation Hash to be compatible with rootchain - When deciding upon what the confirmation hash should consist of, we need to keep a couple of things in mind.
- Confirmation Hash needs to be secure against a cross plasma chain attack outlined [here](https://github.com/FourthState/plasma-mvp-rootchain/issues/48)
- It needs to be unique so an attacker cannot reuse an old confirmation signature on a new tx.
Solution 1 (currently implemented on the rootchain): Sign over the `Hash(Hash(txBytes) + block_hash)`. By taking the hash of txbytes, we have uniqueness since each set of txbytes contains at least one position and every position is unique. Signing over the block hash allows us to avoid a cross chain attack since the block hash is unique and can only be recreated if another chain has identical history up to that block. Deposits and fees will not have confirm signatures associated with them and therefore not cause any issues with this scheme.
Solution 2: Sign over the `Hash(Hash(contract address) + Hash(position priority)) `. The ethereum contract address allows us to avoid the cross plasma chain attack and the hash of the priority provides uniqueness. However, hashes of uints in golang can produce varying results to hashes on uints in solidity.
Solution 3: Sign over the `Hash(Hash(contract address) + Hash(rlp encoded position))`. This maintains the same properties as above except the hashes in golang and solidity will be consistent. Since solidity to the best of my knowledge does not have an onchain rlp encoder at the moment, we will pass in the position bytes into the start exit functions.
Solution 1 is our current decision on how to handle confirmation hashes.
| non_priority | change confirmation hash to be compatible with rootchain when deciding upon what the confirmation hash should consist of we need to keep a couple a things in mind confirmation hash needs to be secure against a cross plasma chain attack outlined it needs to be unique so an attacker cannot reuse an old confirmation signature on a new tx solution currently implemented on the rootchain sign over the hash hash txbytes block hash by taking the hash of txbytes we have uniqueness since each set of txbytes contains at least one position and every position is unqiue signing over the block hash allows us to avoid a cross chain attack since the block hash is unique and can only be recreated if another chain has identical history up to that block deposits and fees will not have confirm signatures associated with them and therefore not cause any issues with this scheme solution sign over the hash hash contract address hash position priority the ethereum contract address allows us to avoid the cross plasma chain attack and the hash of the priority provides uniqueness however hashes of uints in golang can produce varying results to hashes on uints in solidity solution sign over the hash hash contract address hash rlp encoded position this maintains the same properties as above except the hashes in golang and solidity will be consistent since solidity to the best of my knowledge does not have an onchain rlp encoder at the moment we will pass in the position bytes into the start exit functions solution is our current decision on how to handle confirmation hashes | 0 |
12,768 | 15,034,202,536 | IssuesEvent | 2021-02-02 12:34:06 | jiangdashao/Matrix-Issues | https://api.github.com/repos/jiangdashao/Matrix-Issues | closed | [INCOMPATIBILITY] GPFlags | Incompatibility Invalid | ## Troubleshooting Information
- [x] The incompatible plugin is up-to-date
- [X] Matrix and ProtocolLib are up-to-date
- [X] Matrix is running on a 1.8, 1.12, 1.13, 1.14, 1.15, or 1.16 server
- [X] The issue happens on default config.yml and checks.yml
- [X] I've tested if the issue happens on default config
## Issue Information
**Server version**: Purpur 966 (1.16.5)
**Incompatible plugin**: [GriefPreventionFlags](https://github.com/ShaneBeee/GriefPreventionFlags)
**Verbose messages (or) console errors**: none
**How/when does this happen**: Someone flies or falls in a claim with ownerfly/ownermemberfly or nofall flags.
**Video of incompatibility**:
**Other information**: | True | [INCOMPATIBILITY] GPFlags - ## Troubleshooting Information
- [x] The incompatible plugin is up-to-date
- [X] Matrix and ProtocolLib are up-to-date
- [X] Matrix is running on a 1.8, 1.12, 1.13, 1.14, 1.15, or 1.16 server
- [X] The issue happens on default config.yml and checks.yml
- [X] I've tested if the issue happens on default config
## Issue Information
**Server version**: Purpur 966 (1.16.5)
**Incompatible plugin**: [GriefPreventionFlags](https://github.com/ShaneBeee/GriefPreventionFlags)
**Verbose messages (or) console errors**: none
**How/when does this happen**: Someone flies or falls in a claim with ownerfly/ownermemberfly or nofall flags.
**Video of incompatibility**:
**Other information**: | non_priority | gpflags troubleshooting information the incompatible plugin is up to date matrix and protocollib are up to date matrix is running on a or server the issue happens on default config yml and checks yml i ve tested if the issue happens on default config issue information server version purpur incompatible plugin verbose messages or console errors none how when does this happen someone flies or falls in a claim with ownerfly ownermemberfly or nofall flags video of incompatibility other information | 0 |
182,172 | 14,107,477,989 | IssuesEvent | 2020-11-06 16:21:12 | phetsims/projectile-motion | https://api.github.com/repos/phetsims/projectile-motion | opened | CT No client guides | type:automated-testing | ```
projectile-motion : build
Build failed with status code 1:
Running "report-media" task
Running "clean" task
Running "build" task
Building runnable repository (projectile-motion, brands: phet, phet-io)
Building brand: phet
>> Webpack build complete: 14633ms
>> Production minification complete: 43489ms (2105689 bytes)
>> Debug minification complete: 19383ms (3858532 bytes)
Building brand: phet-io
>> Webpack build complete: 3149ms
>> Production minification complete: 22788ms (2119337 bytes)
>> Debug minification complete: 22153ms (2380146 bytes)
>> No client guides found at ../phet-io-client-guides/projectile-motion/, no guides being built.
Fatal error: Perennial task failed: Error: TypeError: this.ariaValueText.replaceAll is not a function
at qb.updateAriaValueText (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:1418390)
at http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:1418124
at s (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:27486)
at r.emit (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:98176)
at P._notifyListeners (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:30205)
at P.set (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:29573)
at http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:1430727
at b.link (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:31034)
at new Nb (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:1430707)
at new qb (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:1433323)
(node:155221) Warning: Accessing non-existent property 'padLevels' of module exports inside circular dependency
(Use `node --trace-warnings ...` to show where the warning was created)
Browserslist: caniuse-lite is outdated. Please run:
npx browserslist@latest --update-db
Snapshot from 11/6/2020, 1:53:07 AM
``` | 1.0 | CT No client guides - ```
projectile-motion : build
Build failed with status code 1:
Running "report-media" task
Running "clean" task
Running "build" task
Building runnable repository (projectile-motion, brands: phet, phet-io)
Building brand: phet
>> Webpack build complete: 14633ms
>> Production minification complete: 43489ms (2105689 bytes)
>> Debug minification complete: 19383ms (3858532 bytes)
Building brand: phet-io
>> Webpack build complete: 3149ms
>> Production minification complete: 22788ms (2119337 bytes)
>> Debug minification complete: 22153ms (2380146 bytes)
>> No client guides found at ../phet-io-client-guides/projectile-motion/, no guides being built.
Fatal error: Perennial task failed: Error: TypeError: this.ariaValueText.replaceAll is not a function
at qb.updateAriaValueText (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:1418390)
at http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:1418124
at s (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:27486)
at r.emit (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:98176)
at P._notifyListeners (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:30205)
at P.set (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:29573)
at http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:1430727
at b.link (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:31034)
at new Nb (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:1430707)
at new qb (http://localhost:37167/projectile-motion/build/phet-io/projectile-motion_all_phet-io.html?ea&brand=phet-io&phetioStandalone&phetioPrintAPI:846:1433323)
(node:155221) Warning: Accessing non-existent property 'padLevels' of module exports inside circular dependency
(Use `node --trace-warnings ...` to show where the warning was created)
Browserslist: caniuse-lite is outdated. Please run:
npx browserslist@latest --update-db
Snapshot from 11/6/2020, 1:53:07 AM
``` | non_priority | ct no client guides projectile motion build build failed with status code running report media task running clean task running build task building runnable repository projectile motion brands phet phet io building brand phet webpack build complete production minification complete bytes debug minification complete bytes building brand phet io webpack build complete production minification complete bytes debug minification complete bytes no client guides found at phet io client guides projectile motion no guides being built fatal error perennial task failed error typeerror this ariavaluetext replaceall is not a function at qb updateariavaluetext at at s at r emit at p notifylisteners at p set at at b link at new nb at new qb node warning accessing non existent property padlevels of module exports inside circular dependency use node trace warnings to show where the warning was created browserslist caniuse lite is outdated please run npx browserslist latest update db snapshot from am | 0 |
8,123 | 2,963,685,376 | IssuesEvent | 2015-07-10 12:19:01 | trendwerk/trendpress | https://api.github.com/repos/trendwerk/trendpress | closed | Timber: the_excerpt | improvement needs-testing | `loop-search.php` contains a function call to `the_excerpt()`. We should look for alternatives. | 1.0 | Timber: the_excerpt - `loop-search.php` contains a function call to `the_excerpt()`. We should look for alternatives. | non_priority | timber the excerpt loop search php contains a function call to the excerpt we should look for alternatives | 0 |
110,126 | 9,436,163,127 | IssuesEvent | 2019-04-13 03:40:34 | att/ast | https://api.github.com/repos/att/ast | opened | The `tstfile()` and related functions need to be changed | cleanup testing | I recently modified the API tests to be run via a test runner script. That script is responsible for creating a unique temp dir for the test and removing its contents if the test passes. It is no longer necessary for individual API tests to remove transient files they create when the test terminates. That they do so is actually counter productive since it makes debugging test failures more difficult since the removal of such file system artifacts makes it harder to understand why the test failed. | 1.0 | The `tstfile()` and related functions need to be changed - I recently modified the API tests to be run via a test runner script. That script is responsible for creating a unique temp dir for the test and removing its contents if the test passes. It is no longer necessary for individual API tests to remove transient files they create when the test terminates. That they do so is actually counter productive since it makes debugging test failures more difficult since the removal of such file system artifacts makes it harder to understand why the test failed. | non_priority | the tstfile and related functions need to be changed i recently modified the api tests to be run via a test runner script that script is responsible for creating a unique temp dir for the test and removing its contents if the test passes it is no longer necessary for individual api tests to remove transient files they create when the test terminates that they do so is actually counter productive since it makes debugging test failures more difficult since the removal of such file system artifacts makes it harder to understand why the test failed | 0 |
261,749 | 19,724,526,916 | IssuesEvent | 2022-01-13 18:34:25 | openthread/ot-docs | https://api.github.com/repos/openthread/ot-docs | closed | No ping6 in codelab_otsim | bug documentation | When following https://openthread.io/codelabs/openthread-simulation#4 step 5, I get the following:
```
root@a761053b08c5:~/src/openthread# ping6 -c 4 fdd5:a079:4a43:2154:7d6a:709b:42d:a16d
bash: ping6: command not found
```
If I manually install the `iputils-ping` package, it works OK, so it looks like `ping6` is missing from the Docker image.
It also says:
> on your host machine's command line, ping Node 1
whereas it makes more sense to me to say something like:
> on the container's command line, ping Node 1 | 1.0 | No ping6 in codelab_otsim - When following https://openthread.io/codelabs/openthread-simulation#4 step 5, I get the following:
```
root@a761053b08c5:~/src/openthread# ping6 -c 4 fdd5:a079:4a43:2154:7d6a:709b:42d:a16d
bash: ping6: command not found
```
If I manually install the `iputils-ping` package, it works OK, so it looks like `ping6` is missing from the Docker image.
It also says:
> on your host machine's command line, ping Node 1
whereas it makes more sense to me to say something like:
> on the container's command line, ping Node 1 | non_priority | no in codelab otsim when following step i get the following root src openthread c bash command not found if i manually install the iputils ping package it works ok so it looks like is missing from the docker image it also says on your host machine s command line ping node whereas it makes more sense to me to say something like on the container s command line ping node | 0 |
40,300 | 12,758,144,692 | IssuesEvent | 2020-06-29 01:04:45 | kenferrara/arcgis-rest-js | https://api.github.com/repos/kenferrara/arcgis-rest-js | opened | CVE-2018-11696 (High) detected in node-sass-4.12.0.tgz | security vulnerability | ## CVE-2018-11696 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.12.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/arcgis-rest-js/package.json</p>
<p>Path to vulnerable library: /arcgis-rest-js/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.12.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Inspect::operator which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11696>CVE-2018-11696</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/libsass/issues/2665">https://github.com/sass/libsass/issues/2665</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: Libsass:3.5.5, Node-sass:4.14.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.12.0","isTransitiveDependency":false,"dependencyTree":"node-sass:4.12.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"Libsass:3.5.5, Node-sass:4.14.0"}],"vulnerabilityIdentifier":"CVE-2018-11696","vulnerabilityDetails":"An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Inspect::operator which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11696","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-11696 (High) detected in node-sass-4.12.0.tgz - ## CVE-2018-11696 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.12.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/arcgis-rest-js/package.json</p>
<p>Path to vulnerable library: /arcgis-rest-js/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.12.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Inspect::operator which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11696>CVE-2018-11696</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/libsass/issues/2665">https://github.com/sass/libsass/issues/2665</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: Libsass:3.5.5, Node-sass:4.14.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.12.0","isTransitiveDependency":false,"dependencyTree":"node-sass:4.12.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"Libsass:3.5.5, Node-sass:4.14.0"}],"vulnerabilityIdentifier":"CVE-2018-11696","vulnerabilityDetails":"An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Inspect::operator which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11696","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in node sass tgz cve high severity vulnerability vulnerable library node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm arcgis rest js package json path to vulnerable library arcgis rest js node modules node sass package json dependency hierarchy x node sass tgz vulnerable library vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass inspect operator which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass node sass rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails an issue was discovered in libsass through a null pointer dereference was found in the function sass inspect operator which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact vulnerabilityurl | 0 |
7,459 | 17,931,806,937 | IssuesEvent | 2021-09-10 10:14:17 | RasaHQ/rasa | https://api.github.com/repos/RasaHQ/rasa | opened | Prepare `Processor` for integration | type:enhancement :sparkles: area:rasa-oss :ferris_wheel: feature:rasa-3.0/architecture | **Description**
To make the integration phase a little bit easier, we want to get the old `Processor` class as close as possible to the `GraphProcessor` interface. To do so, we want to move out all infrastructure-related code (tracker_store, lock_store).
**Processor**
- [ ] The processor doesn't contain any infrastructure related code | 1.0 | Prepare `Processor` for integration - **Description**
To make the integration phase a little bit easier, we want to get the old `Processor` class as close as possible to the `GraphProcessor` interface. To do so, we want to move out all infrastructure-related code (tracker_store, lock_store).
**Processor**
- [ ] The processor doesn't contain any infrastructure related code | non_priority | prepare processor for integration description to make the integration phase a little bit easier we want to get the old processor class as close as possible to the graphprocessor interface to do so we want to move out all infrastructure tracker store lock store related code processor the processor doesn t contain any infrastructure related code | 0 |
99,003 | 8,687,433,532 | IssuesEvent | 2018-12-03 13:45:25 | FreeRDP/FreeRDP | https://api.github.com/repos/FreeRDP/FreeRDP | closed | FreeRDP crash on startup in Windows when started in service mode | client fixed-waiting-test help-wanted windows | **Description of the bug**
When FreeRDP is started from a service (SYSTEM user), it crashes with an access violation on startup.
The crash happens in function **wfreerdp_client_global_init** (client\Windows\wf_client.c) on
strcat(home, getenv("HOMEDRIVE"));
as getenv returns NULL.
As the SYSTEM user, there are no HOME, HOMEDRIVE, or HOMEPATH environment variables by default.
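For illustration, a minimal sketch of the missing guard (TypeScript is used here purely for illustration; the actual fix belongs in C inside `wfreerdp_client_global_init`, and the helper name is hypothetical):

```typescript
// Sketch of the needed NULL checks: never concatenate an unchecked lookup.
function buildHomePath(env: NodeJS.ProcessEnv): string | null {
  // Under the SYSTEM account, HOME/HOMEDRIVE/HOMEPATH may all be unset.
  if (env.HOME) {
    return env.HOME;
  }
  if (env.HOMEDRIVE && env.HOMEPATH) {
    return env.HOMEDRIVE + env.HOMEPATH;
  }
  // The caller must handle "no home directory" instead of crashing.
  return null;
}
```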
**To Reproduce**
Steps to reproduce the behavior:
1. Start FreeRDP from a service to log in a user
**Expected behavior**
Proper checks for environment variables that do not exist
**Application details**
* Latest version of FreeRDP (3 Oct 2018)
* OS version being connected to is Windows 8.1
**Desktop (please complete the following information):**
- OS: Windows 8.1
| 1.0 | FreeRDP crash on startup in Windows when started in service mode - **Description of the bug**
When FreeRDP is started from a service (SYSTEM user), it crashes with an access violation on startup.
The crash happens in function **wfreerdp_client_global_init** (client\Windows\wf_client.c) on
strcat(home, getenv("HOMEDRIVE"));
as getenv returns NULL.
As the SYSTEM user, there are no HOME, HOMEDRIVE, or HOMEPATH environment variables by default.
**To Reproduce**
Steps to reproduce the behavior:
1. Start FreeRDP from a service to log in a user
**Expected behavior**
Proper checks for environment variables that do not exist
**Application details**
* Latest version of FreeRDP (3 Oct 2018)
* OS version being connected to is Windows 8.1
**Desktop (please complete the following information):**
- OS: Windows 8.1
| non_priority | freerdp crash on startup in windows when started in service mode description of the bug when freerdp is started from a service system user it crashes with access violation on startup the crash happens in function wfreerdp client global init client windows wf client c on strcat home getenv homedrive as getenv returns null as system user there is no home homedrive homepath environment variables by default to reproduce steps to reproduce the behavior start freerdp from a service to just login a user expected behavior proper check of environment variables that does not exist application details latest version of freerdp os version connecting to is windows desktop please complete the following information os windows | 0 |
45,014 | 23,864,446,045 | IssuesEvent | 2022-09-07 09:48:15 | sapphiredev/shapeshift | https://api.github.com/repos/sapphiredev/shapeshift | opened | request: `setValidationEnabled` should [un]wrap the validator into a `PartialValidator<T>` | performance | ### Is there an existing issue or pull request for this?
- [X] I have searched the existing issues and pull requests
### Feature description
Right now, all validators have checks for whether or not they should run validations, as seen below:
https://github.com/sapphiredev/shapeshift/blob/e9a029a995d6863dfa07ef3493b7da1568ddabef/src/validators/BaseValidator.ts#L74-L78
This comes with a large performance impact, especially for those who want to use the library without conditional validation. It also goes against Shapeshift's internal design of running as few conditionals as possible.
Before we added conditional validation, Shapeshift was comfortably among the fastest libraries in our benchmarks.
### Desired solution
A wrapper would solve the performance impact by making the validators always run the logic and constraints, while the `PartialValidator<T>` would run only the handler and never the constraints (with no extra checks, of course).
For functions (dynamic validation), we can also add a second class, or add a check in `PartialValidator<T>`, which would invalidate the last sentence in the previous paragraph.
Unwrapping a `PartialValidator<T>` should give back the underlying, fully-checked validator.
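For illustration, a minimal TypeScript sketch of the proposed shape; apart from `setValidationEnabled`, the class and method names below are hypothetical and are not shapeshift's actual internals:

```typescript
type Constraint<T> = (value: T) => void; // throws on violation

class FullValidator<T> {
  public constructor(
    private readonly handler: (value: unknown) => T,
    private readonly constraints: Constraint<T>[] = [],
  ) {}

  // Always runs handler and constraints; no "is validation enabled?" branch.
  public run(value: unknown): T {
    const parsed = this.handler(value);
    for (const constraint of this.constraints) constraint(parsed);
    return parsed;
  }

  // Runs only the handler; used by the partial wrapper below.
  public runHandlerOnly(value: unknown): T {
    return this.handler(value);
  }

  // setValidationEnabled(false) wraps instead of toggling a per-call flag.
  public setValidationEnabled(enabled: boolean): FullValidator<T> | PartialValidator<T> {
    return enabled ? this : new PartialValidator(this);
  }
}

class PartialValidator<T> {
  public constructor(private readonly inner: FullValidator<T>) {}

  // Only the handler, never the constraints, and no extra conditional.
  public run(value: unknown): T {
    return this.inner.runHandlerOnly(value);
  }

  // Unwrapping gives back the underlying, fully-checked validator.
  public unwrap(): FullValidator<T> {
    return this.inner;
  }
}
```

With this shape, `validator.setValidationEnabled(false).run(value)` executes the handler only, and the conditional is paid once at wrap time rather than on every `run` call.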
### Alternatives considered
N/a.
### Additional context
_No response_ | True | request: `setValidationEnabled` should [un]wrap the validator into a `PartialValidator<T>` - ### Is there an existing issue or pull request for this?
- [X] I have searched the existing issues and pull requests
### Feature description
Right now, all validators have checks for whether or not they should run validations, as seen below:
https://github.com/sapphiredev/shapeshift/blob/e9a029a995d6863dfa07ef3493b7da1568ddabef/src/validators/BaseValidator.ts#L74-L78
This comes with a large performance impact, especially for those who want to use the library without conditional validation. It also goes against Shapeshift's internal design of running as few conditionals as possible.
Before we added conditional validation, Shapeshift was comfortably among the fastest libraries in our benchmarks.
### Desired solution
A wrapper would solve the performance impact by making the validators always run the logic and constraints, while the `PartialValidator<T>` would run only the handler and never the constraints (with no extra checks, of course).
For functions (dynamic validation), we can also add a second class, or add a check in `PartialValidator<T>`, which would invalidate the last sentence in the previous paragraph.
Unwrapping a `PartialValidator<T>` should give back the underlying, fully-checked validator.
### Alternatives considered
N/a.
### Additional context
_No response_ | non_priority | request setvalidationenabled should wrap the validator into an partialvalidator is there an existing issue or pull request for this i have searched the existing issues and pull requests feature description right now all validators have checks for whether or not they should run validations as seen below this comes with a large performance impact specially from those who desire to use the library without conditional validation also goes against shapeshift s internal design of running the least amount of conditionals as possible before we added conditional validation shapeshift was comfortably among the fastest libraries in our benchmarks desired solution a wrapper would solve the performance impact by making the validators always run the logic and constraints where the partialvalidator would exclusively only run the handler and never the constraints with no extra checks of course for function dynamic validation we can also add a second class or add a check in partialvalidator invalidating the last sentence in the previous paragraph unwrapping a partialvalidator should give back the underlying fully checked validator alternatives considered n a additional context no response | 0 |
417,349 | 28,110,369,032 | IssuesEvent | 2023-03-31 06:36:33 | euph00/ped | https://api.github.com/repos/euph00/ped | opened | command summary table in UG is out of date | type.DocumentationBug severity.Low | delete command suspected to refer to delete_patient
find command suspected to refer to find_patient
find_details command suspected to refer to find_patient details
etc


| 1.0 | command summary table in UG is out of date - delete command suspected to refer to delete_patient
find command suspected to refer to find_patient
find_details command suspected to refer to find_patient details
etc


| non_priority | command summary table in ug is out of date delete command suspected to refer to delete patient find command suspected to refer to find patient find details command suspected to refer to find patient details etc | 0 |
23,681 | 22,586,340,262 | IssuesEvent | 2022-06-28 15:32:21 | informalsystems/apalache | https://api.github.com/repos/informalsystems/apalache | closed | [FEATURE] Ensure post type checking is optional | FTC-Snowcat usability | We run a second pass of the type checker at the end of apalache's runs as a sanity check to ensure none of the other passes broke typing. However, this sanity check is actually required in some scenarios where some polymorphism stays unresolved at the end of the execution - and this problem is only correctly reported by the post type checker pass. Therefore, something that should be optional is actually required in some scenarios.
Example:
```
$ apalache-mc check --inv=Inv test/tla/Bug931.tla
```
[Expected] Output with post type checker:
```
PASS #12: PostTypeCheckerSnowcat I@09:53:52.837
> Running Snowcat .::. I@09:53:52.837
Found a polymorphic type: Set(b) E@09:53:52.889
Probable causes: an empty set { } needs a type annotation or an incorrect record field is used E@09:53:52.890
```
[Actual] Output without post type checker:
```
PASS #12: BoundedChecker I@09:57:38.003
State 0: Checking 1 state invariants I@09:57:38.386
<unknown>: internal error in type checking: Unexpected type VarT1 E@09:57:38.392
at.forsyte.apalache.tla.lir.TypingException: Unexpected type VarT1
at at.forsyte.apalache.tla.bmcmt.types.package$CellT$.fromType1(package.scala:171)
at at.forsyte.apalache.tla.bmcmt.types.package$CellT$.fromType1(package.scala:173)
at at.forsyte.apalache.tla.bmcmt.types.package$CellT$.fromTypeTag(package.scala:196)
at at.forsyte.apalache.tla.bmcmt.rules.SetCtorRule.apply(SetCtorRule.scala:33)
```
Additionally: Should we make this pass run only when `--debug` is active? | True | [FEATURE] Ensure post type checking is optional - We run a second pass of the type checker at the end of apalache's runs as a sanity check to ensure none of the other passes broke typing. However, this sanity check is actually required in some scenarios where some polymorphism stays unresolved at the end of the execution - and this problem is only correctly reported by the post type checker pass. Therefore, something that should be optional is actually required in some scenarios.
Example:
```
$ apalache-mc check --inv=Inv test/tla/Bug931.tla
```
[Expected] Output with post type checker:
```
PASS #12: PostTypeCheckerSnowcat I@09:53:52.837
> Running Snowcat .::. I@09:53:52.837
Found a polymorphic type: Set(b) E@09:53:52.889
Probable causes: an empty set { } needs a type annotation or an incorrect record field is used E@09:53:52.890
```
[Actual] Output without post type checker:
```
PASS #12: BoundedChecker I@09:57:38.003
State 0: Checking 1 state invariants I@09:57:38.386
<unknown>: internal error in type checking: Unexpected type VarT1 E@09:57:38.392
at.forsyte.apalache.tla.lir.TypingException: Unexpected type VarT1
at at.forsyte.apalache.tla.bmcmt.types.package$CellT$.fromType1(package.scala:171)
at at.forsyte.apalache.tla.bmcmt.types.package$CellT$.fromType1(package.scala:173)
at at.forsyte.apalache.tla.bmcmt.types.package$CellT$.fromTypeTag(package.scala:196)
at at.forsyte.apalache.tla.bmcmt.rules.SetCtorRule.apply(SetCtorRule.scala:33)
```
Additionally: Should we make this pass run only when `--debug` is active? | non_priority | ensure post type checking is optional we run a second pass of the type checker at the end of apalache s runs as a sanity check to ensure none of the other passes broke typing however this sanity check is actually required in some scenarios where some polymorphism stays unresolved at the end of the execution and this problem is only correctly reported by the post type checker pass therefore something that should be optional is actually required in some scenarios example apalache mc check inv inv test tla tla output with post type checker pass posttypecheckersnowcat i running snowcat i found a polymorphic type set b e probable causes an empty set needs a type annotation or an incorrect record field is used e output without post type checker pass boundedchecker i state checking state invariants i internal error in type checking unexpected type e at forsyte apalache tla lir typingexception unexpected type at at forsyte apalache tla bmcmt types package cellt package scala at at forsyte apalache tla bmcmt types package cellt package scala at at forsyte apalache tla bmcmt types package cellt fromtypetag package scala at at forsyte apalache tla bmcmt rules setctorrule apply setctorrule scala additionally should we make this pass run only when debug is active | 0 |
74,871 | 9,810,935,097 | IssuesEvent | 2019-06-12 21:49:30 | OctoPerf/kraken | https://api.github.com/repos/OctoPerf/kraken | opened | CI Documentation | backend documentation | Add Swagger (or other tools https://dzone.com/articles/rest-api-documentation-generators-for-java) to all Java backends that expose a REST API.
Add cURL samples to call the REST API in order to:
* upload/download files
* start debug
* start test
* get gatling result
* get grafana result
| 1.0 | CI Documentation - Add Swagger (or other tools https://dzone.com/articles/rest-api-documentation-generators-for-java) to all Java backends that expose a REST API.
Add cURL samples to call the REST API in order to:
* upload/download files
* start debug
* start test
* get gatling result
* get grafana result
| non_priority | ci documentation add swagger or other tools to all java backends that expose a rest api add curl sample to call the rest api in order to upload download files start debug start test get gatling result get grafana result | 0 |
56,057 | 14,915,252,772 | IssuesEvent | 2021-01-22 16:28:30 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | opened | MenuButton: allow to use p:divider instead of p:separator in PF10 | defect | **Describe the defect**
```
class org.primefaces.component.divider.Divider cannot be cast to class org.primefaces.model.menu.MenuElement (org.primefaces.component.divider.Divider and org.primefaces.model.menu.MenuElement are in unnamed module of loader org.apache.catalina.loader.ParallelWebappClassLoader @5b6b17fa)
```
**Reproducer**
Replace `p:separator` (which was deprecated in PF10) with `p:divider` in this example
https://www.primefaces.org/showcase/ui/menu/menuButton.xhtml?jfwid=b7b31
**Environment:**
- PF Version: 10-RC1
| 1.0 | MenuButton: allow to use p:divider instead of p:separator in PF10 - **Describe the defect**
```
class org.primefaces.component.divider.Divider cannot be cast to class org.primefaces.model.menu.MenuElement (org.primefaces.component.divider.Divider and org.primefaces.model.menu.MenuElement are in unnamed module of loader org.apache.catalina.loader.ParallelWebappClassLoader @5b6b17fa)
```
**Reproducer**
Replace `p:separator` (which was deprecated in PF10) with `p:divider` in this example
https://www.primefaces.org/showcase/ui/menu/menuButton.xhtml?jfwid=b7b31
**Environment:**
- PF Version: 10-RC1
| non_priority | menubutton allow to use p divider instead of p separator in describe the defect class org primefaces component divider divider cannot be cast to class org primefaces model menu menuelement org primefaces component divider divider and org primefaces model menu menuelement are in unnamed module of loader org apache catalina loader parallelwebappclassloader reproducer replace p separator which was deprecated in with p divider in this example environment pf version | 0 |
99,890 | 21,053,288,080 | IssuesEvent | 2022-03-31 22:48:33 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | [4.0?][4.1] No more 404 error with inexistent urls | No Code Attached Yet | ### Steps to reproduce the issue
Launch:
**www.mywebsite.com/%$£-wrong-or-inexistent-url**
### Expected result
A 404 error page
### Actual result
It loads up to the "first slash" of THE VALID URL; in this case it loads the homepage, without any 404 error
**www.mywebsite.com/%$£-wrong-or-inexistent-url** => J4.1 loads: www.mywebsite.com
### System information (as much as possible)
Joomla 4.1 on Apache+Nginx using MySQL
Joomla 4.1 on OpenLitespeed using MariaDB
### Additional comments
**For a wrong URL with a second "slash" (/), it correctly returns a 404 error page**
e.g.: **www.mywebsite.com/category/%$£-wrong-or-inexistent-url** => J4.1 correctly returns a 404 error | 1.0 | [4.0?][4.1] No more 404 error with inexistent urls - ### Steps to reproduce the issue
Launch:
**www.mywebsite.com/%$£-wrong-or-inexistent-url**
### Expected result
A 404 error page
### Actual result
It loads up to the "first slash" of THE VALID URL; in this case it loads the homepage, without any 404 error
**www.mywebsite.com/%$£-wrong-or-inexistent-url** => J4.1 loads: www.mywebsite.com
### System information (as much as possible)
Joomla 4.1 on Apache+Nginx using MySQL
Joomla 4.1 on OpenLitespeed using MariaDB
### Additional comments
**For a wrong URL with a second "slash" (/), it correctly returns a 404 error page**
e.g.: **www.mywebsite.com/category/%$£-wrong-or-inexistent-url** => J4.1 correctly returns a 404 error | non_priority | no more error with inexistent urls steps to reproduce the issue launch expected result a error page actual result it loads up to the first slash of the valid url in this case it loads homepage without any error loads system information as much as possible joomla on apache nginx using mysql joomla on openlitespeed using mariadb additional comments for wrong url with a second slash it returns correctly a error page i e returns correctly a error | 0 |
144,704 | 19,296,153,206 | IssuesEvent | 2021-12-12 16:17:44 | AlexRogalskiy/typescript-tools | https://api.github.com/repos/AlexRogalskiy/typescript-tools | closed | CVE-2021-32796 (Medium) detected in xmldom-0.6.0.tgz | security vulnerability potential duplicate Status: Invalid | ## CVE-2021-32796 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmldom-0.6.0.tgz</b></p></summary>
<p>A pure JavaScript W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmldom/-/xmldom-0.6.0.tgz">https://registry.npmjs.org/xmldom/-/xmldom-0.6.0.tgz</a></p>
<p>Path to dependency file: typescript-tools/package.json</p>
<p>Path to vulnerable library: /node_modules/xmldom/package.json</p>
<p>
Dependency Hierarchy:
- :x: **xmldom-0.6.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/typescript-tools/commit/a18e5af080b78d64b4a8d452840600495eaaf3fa">a18e5af080b78d64b4a8d452840600495eaaf3fa</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
xmldom is an open source pure JavaScript W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module. xmldom versions 0.6.0 and older do not correctly escape special characters when serializing elements removed from their ancestor. This may lead to unexpected syntactic changes during XML processing in some downstream applications. This issue has been resolved in version 0.7.0. As a workaround, downstream applications can validate the input and reject maliciously crafted documents.
<p>Publish Date: 2021-07-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32796>CVE-2021-32796</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xmldom/xmldom/security/advisories/GHSA-5fg8-2547-mr8q">https://github.com/xmldom/xmldom/security/advisories/GHSA-5fg8-2547-mr8q</a></p>
<p>Release Date: 2021-07-27</p>
<p>Fix Resolution: xmldom - 0.7.0</p>
</p>
</details>
<p></p>
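For illustration, one way a downstream application might apply the "validate the input" workaround is a serialize/re-parse round-trip check. `DOMParser` and `XMLSerializer` are xmldom's real exports, but the round-trip strategy and the `serializeStrict` helper are assumptions, not taken from the advisory; the real fix remains upgrading to 0.7.0:

```typescript
import { DOMParser, XMLSerializer } from "xmldom";

// Illustrative workaround: if unescaped characters from a removed element
// changed the document structure, the output will not round-trip cleanly.
export function serializeStrict(node: Node): string {
  const serialized = new XMLSerializer().serializeToString(node);
  const reparsed = new DOMParser().parseFromString(serialized, "text/xml");
  const roundTripped = new XMLSerializer().serializeToString(reparsed);

  if (roundTripped !== serialized) {
    // Serializers may also legitimately normalize quoting and whitespace, so
    // a production check would compare canonicalized forms instead.
    throw new Error("XML round-trip mismatch; rejecting possibly mangled output");
  }
  return serialized;
}
```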
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-32796 (Medium) detected in xmldom-0.6.0.tgz - ## CVE-2021-32796 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmldom-0.6.0.tgz</b></p></summary>
<p>A pure JavaScript W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmldom/-/xmldom-0.6.0.tgz">https://registry.npmjs.org/xmldom/-/xmldom-0.6.0.tgz</a></p>
<p>Path to dependency file: typescript-tools/package.json</p>
<p>Path to vulnerable library: /node_modules/xmldom/package.json</p>
<p>
Dependency Hierarchy:
- :x: **xmldom-0.6.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/typescript-tools/commit/a18e5af080b78d64b4a8d452840600495eaaf3fa">a18e5af080b78d64b4a8d452840600495eaaf3fa</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
xmldom is an open source pure JavaScript W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module. xmldom versions 0.6.0 and older do not correctly escape special characters when serializing elements removed from their ancestor. This may lead to unexpected syntactic changes during XML processing in some downstream applications. This issue has been resolved in version 0.7.0. As a workaround, downstream applications can validate the input and reject maliciously crafted documents.
<p>Publish Date: 2021-07-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32796>CVE-2021-32796</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xmldom/xmldom/security/advisories/GHSA-5fg8-2547-mr8q">https://github.com/xmldom/xmldom/security/advisories/GHSA-5fg8-2547-mr8q</a></p>
<p>Release Date: 2021-07-27</p>
<p>Fix Resolution: xmldom - 0.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in xmldom tgz cve medium severity vulnerability vulnerable library xmldom tgz a pure javascript standard based xml dom level core domparser and xmlserializer module library home page a href path to dependency file typescript tools package json path to vulnerable library node modules xmldom package json dependency hierarchy x xmldom tgz vulnerable library found in head commit a href vulnerability details xmldom is an open source pure javascript standard based xml dom level core domparser and xmlserializer module xmldom versions and older do not correctly escape special characters when serializing elements removed from their ancestor this may lead to unexpected syntactic changes during xml processing in some downstream applications this issue has been resolved in version as a workaround downstream applications can validate the input and reject the maliciously crafted documents publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmldom step up your open source security game with whitesource | 0 |
347,989 | 31,391,161,665 | IssuesEvent | 2023-08-26 11:06:32 | patrick-rivos/riscv-gnu-toolchain | https://api.github.com/repos/patrick-rivos/riscv-gnu-toolchain | opened | Testsuite Status b88636400f0e8e9c4155f802475e65018a4425d2 | testsuite-failure bug | # Summary
|Testsuite Failures|Additional Info|
|---|---|
|gcc-linux-rv32gcv-ilp32d-b88636400f0e8e9c4155f802475e65018a4425d2-non-multilib|Cannot find testsuite artifact. Likely caused by testsuite timeout.|
|gcc-linux-rv64imafdcv_zicond_zawrs_zbc_zvkng_zvksg_zvbb_zvbc_zicsr_zba_zbb_zbs_zicbom_zicbop_zicboz_zfhmin_zkt-lp64d-b88636400f0e8e9c4155f802475e65018a4425d2-non-multilib|Cannot find testsuite artifact. Likely caused by testsuite timeout.|
|gcc-newlib-rv64gc-lp64d-b88636400f0e8e9c4155f802475e65018a4425d2-multilib|Cannot find testsuite artifact. Likely caused by testsuite timeout.|
|New Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|linux: rv64 Bitmanip lp64d medlow|3/1|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: RVA23U64 profile lp64d medlow|10/2|5/1|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|Resolved Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|linux: rv32 Bitmanip ilp32d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv32 Vector Crypto ilp32d medlow|21/5|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64 Bitmanip lp64d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64 Vector Crypto lp64d medlow|16/2|0/0|0/0|[470da3b27e6dbeb3286b09dcb1c1b810ac75b276](https://github.com/gcc-mirror/gcc/compare/470da3b27e6dbeb3286b09dcb1c1b810ac75b276...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64gcv lp64d medlow|16/2|0/0|0/0|[470da3b27e6dbeb3286b09dcb1c1b810ac75b276](https://github.com/gcc-mirror/gcc/compare/470da3b27e6dbeb3286b09dcb1c1b810ac75b276...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64imafdc lp64d medlow multilib|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: RVA23U64 profile lp64d medlow|44/17|4/1|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32 Bitmanip ilp32d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32 Vector Crypto ilp32d medlow|19/5|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32gcv ilp32d medlow|19/5|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64 Bitmanip lp64d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64 Vector Crypto lp64d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64gcv lp64d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|Unresolved Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|linux: rv32 Bitmanip ilp32d medlow|42/33|11/4|12/2|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv32 Vector Crypto ilp32d medlow|75/63|15/8|79/14|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64 Bitmanip lp64d medlow|33/26|9/3|12/2|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64 Vector Crypto lp64d medlow|51/44|10/4|79/14|[470da3b27e6dbeb3286b09dcb1c1b810ac75b276](https://github.com/gcc-mirror/gcc/compare/470da3b27e6dbeb3286b09dcb1c1b810ac75b276...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64gcv lp64d medlow|54/45|10/4|79/14|[470da3b27e6dbeb3286b09dcb1c1b810ac75b276](https://github.com/gcc-mirror/gcc/compare/470da3b27e6dbeb3286b09dcb1c1b810ac75b276...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64imafdc lp64d medlow multilib|16/9|9/3|12/2|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: RVA23U64 profile lp64d medlow|1230/363|1168/330|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32 Bitmanip ilp32d medlow|64/15|17/6|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32 Vector Crypto ilp32d medlow|104/50|21/10|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32gcv ilp32d medlow|100/46|21/10|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64 Bitmanip lp64d medlow|63/20|15/5|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64 Vector Crypto lp64d medlow|85/42|16/6|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64gcv lp64d medlow|81/38|16/6|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
## Architecture Specific New Failures
linux rv64gc_zba_zbb_zbc_zbs lp64d:
```
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 0" 2
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 2" 1
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 3" 1
```
newlib rv64imafdcv_zicond_zawrs_zbc_zvkng_zvksg_zvbb_zvbc_zicsr_zba_zbb_zbs_zicbom_zicbop_zicboz_zfhmin_zkt lp64d:
```
FAIL: g++.dg/torture/pr35634.C -O2 execution test
FAIL: g++.dg/torture/pr35634.C -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: g++.dg/torture/pr35634.C -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: g++.dg/torture/pr35634.C -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
FAIL: g++.dg/torture/pr35634.C -O3 -g execution test
FAIL: gcc.c-torture/execute/pr92140.c -O2 execution test
FAIL: gcc.c-torture/execute/pr92140.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.c-torture/execute/pr92140.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.c-torture/execute/pr92140.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
FAIL: gcc.c-torture/execute/pr92140.c -O3 -g execution test
FAIL: gcc.dg/torture/pr35634.c -O2 execution test
FAIL: gcc.dg/torture/pr35634.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.dg/torture/pr35634.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.dg/torture/pr35634.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
FAIL: gcc.dg/torture/pr35634.c -O3 -g execution test
```
## Resolved Failures Across All Affected Targets (13 targets / 13 total targets)
```
FAIL: gcc.target/riscv/stack_save_restore_1.c -O0 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O1 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O2 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O2 -flto -fno-use-linker-plugin -flto-partition=none check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O3 -g check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -Os check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O0 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O1 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O2 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O2 -flto -fno-use-linker-plugin -flto-partition=none check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O3 -g check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -Os check-function-bodies bar
```
## Architecture Specific Resolved Failures
linux rv32gcv_zvbb_zvbc_zvkg_zvkn_zvknc_zvkned_zvkng_zvknha_zvknhb_zvks_zvksc_zvksed_zvksg_zvksh_zvkt ilp32d:
```
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 0" 2
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 2" 1
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 3" 1
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-7.c execution test
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-8.c execution test
```
newlib rv32gcv ilp32d:
```
FAIL: gcc.target/riscv/rvv/autovec/cond/cond_arith_run-9.c (test for excess errors)
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-1.c (test for excess errors)
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-2.c (test for excess errors)
```
newlib rv64imafdcv_zicond_zawrs_zbc_zvkng_zvksg_zvbb_zvbc_zicsr_zba_zbb_zbs_zicbom_zicbop_zicboz_zfhmin_zkt lp64d:
```
FAIL: g++.dg/tree-ssa/pr66726.C -std=gnu++14 execution test
FAIL: g++.dg/tree-ssa/pr66726.C -std=gnu++17 execution test
FAIL: g++.dg/tree-ssa/pr66726.C -std=gnu++20 execution test
FAIL: g++.dg/tree-ssa/pr66726.C -std=gnu++98 execution test
FAIL: gcc.c-torture/execute/20071216-1.c -O2 execution test
FAIL: gcc.c-torture/execute/20071216-1.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.c-torture/execute/20140212-2.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.c-torture/execute/20141125-1.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.c-torture/execute/960302-1.c -O1 execution test
FAIL: gcc.c-torture/execute/compndlit-1.c -O1 execution test
FAIL: gcc.c-torture/execute/compndlit-1.c -O2 execution test
FAIL: gcc.c-torture/execute/compndlit-1.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.c-torture/execute/compndlit-1.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.c-torture/execute/compndlit-1.c -O3 -g execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O1 execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O2 execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O3 -g execution test
FAIL: gcc.c-torture/execute/pr46909-2.c -O1 execution test
FAIL: gcc.c-torture/execute/pr59014-2.c -O1 execution test
FAIL: gcc.c-torture/execute/pr59014-2.c -O2 execution test
FAIL: gcc.c-torture/execute/pr59014-2.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.c-torture/execute/pr59014-2.c -O3 -g execution test
FAIL: gcc.c-torture/execute/pr68841.c -O1 execution test
FAIL: gcc.c-torture/execute/pr78617.c -O1 execution test
FAIL: gcc.dg/pr68841.c execution test
FAIL: gcc.dg/torture/pr45830.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.dg/torture/pr68264.c -Os execution test
FAIL: gcc.dg/tree-ssa/ifc-pr47271.c execution test
FAIL: gcc.dg/tree-ssa/ssa-dom-cse-3.c execution test
```
newlib rv32gcv_zvbb_zvbc_zvkg_zvkn_zvknc_zvkned_zvkng_zvknha_zvknhb_zvks_zvksc_zvksed_zvksg_zvksh_zvkt ilp32d:
```
FAIL: gcc.target/riscv/rvv/autovec/cond/cond_arith_run-9.c (test for excess errors)
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-1.c (test for excess errors)
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-2.c (test for excess errors)
```
Associated run is: https://github.com/patrick-rivos/riscv-gnu-toolchain/actions/runs/5981903667
| 1.0 | Testsuite Status b88636400f0e8e9c4155f802475e65018a4425d2 - # Summary
|Testsuite Failures|Additional Info|
|---|---|
|gcc-linux-rv32gcv-ilp32d-b88636400f0e8e9c4155f802475e65018a4425d2-non-multilib|Cannot find testsuite artifact. Likely caused by testsuite timeout.|
|gcc-linux-rv64imafdcv_zicond_zawrs_zbc_zvkng_zvksg_zvbb_zvbc_zicsr_zba_zbb_zbs_zicbom_zicbop_zicboz_zfhmin_zkt-lp64d-b88636400f0e8e9c4155f802475e65018a4425d2-non-multilib|Cannot find testsuite artifact. Likely caused by testsuite timeout.|
|gcc-newlib-rv64gc-lp64d-b88636400f0e8e9c4155f802475e65018a4425d2-multilib|Cannot find testsuite artifact. Likely caused by testsuite timeout.|
|New Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|linux: rv64 Bitmanip lp64d medlow|3/1|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: RVA23U64 profile lp64d medlow|10/2|5/1|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|Resolved Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|linux: rv32 Bitmanip ilp32d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv32 Vector Crypto ilp32d medlow|21/5|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64 Bitmanip lp64d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64 Vector Crypto lp64d medlow|16/2|0/0|0/0|[470da3b27e6dbeb3286b09dcb1c1b810ac75b276](https://github.com/gcc-mirror/gcc/compare/470da3b27e6dbeb3286b09dcb1c1b810ac75b276...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64gcv lp64d medlow|16/2|0/0|0/0|[470da3b27e6dbeb3286b09dcb1c1b810ac75b276](https://github.com/gcc-mirror/gcc/compare/470da3b27e6dbeb3286b09dcb1c1b810ac75b276...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64imafdc lp64d medlow multilib|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: RVA23U64 profile lp64d medlow|44/17|4/1|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32 Bitmanip ilp32d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32 Vector Crypto ilp32d medlow|19/5|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32gcv ilp32d medlow|19/5|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64 Bitmanip lp64d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64 Vector Crypto lp64d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64gcv lp64d medlow|16/2|0/0|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|Unresolved Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|linux: rv32 Bitmanip ilp32d medlow|42/33|11/4|12/2|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv32 Vector Crypto ilp32d medlow|75/63|15/8|79/14|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64 Bitmanip lp64d medlow|33/26|9/3|12/2|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64 Vector Crypto lp64d medlow|51/44|10/4|79/14|[470da3b27e6dbeb3286b09dcb1c1b810ac75b276](https://github.com/gcc-mirror/gcc/compare/470da3b27e6dbeb3286b09dcb1c1b810ac75b276...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64gcv lp64d medlow|54/45|10/4|79/14|[470da3b27e6dbeb3286b09dcb1c1b810ac75b276](https://github.com/gcc-mirror/gcc/compare/470da3b27e6dbeb3286b09dcb1c1b810ac75b276...b88636400f0e8e9c4155f802475e65018a4425d2)|
|linux: rv64imafdc lp64d medlow multilib|16/9|9/3|12/2|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: RVA23U64 profile lp64d medlow|1230/363|1168/330|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32 Bitmanip ilp32d medlow|64/15|17/6|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32 Vector Crypto ilp32d medlow|104/50|21/10|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv32gcv ilp32d medlow|100/46|21/10|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64 Bitmanip lp64d medlow|63/20|15/5|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64 Vector Crypto lp64d medlow|85/42|16/6|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
|newlib: rv64gcv lp64d medlow|81/38|16/6|0/0|[6df8dcec7196e42ca2eed69e1ae455bae8d0fe93](https://github.com/gcc-mirror/gcc/compare/6df8dcec7196e42ca2eed69e1ae455bae8d0fe93...b88636400f0e8e9c4155f802475e65018a4425d2)|
## Architecture Specific New Failures
linux rv64gc_zba_zbb_zbc_zbs lp64d:
```
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 0" 2
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 2" 1
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 3" 1
```
newlib rv64imafdcv_zicond_zawrs_zbc_zvkng_zvksg_zvbb_zvbc_zicsr_zba_zbb_zbs_zicbom_zicbop_zicboz_zfhmin_zkt lp64d:
```
FAIL: g++.dg/torture/pr35634.C -O2 execution test
FAIL: g++.dg/torture/pr35634.C -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: g++.dg/torture/pr35634.C -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: g++.dg/torture/pr35634.C -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
FAIL: g++.dg/torture/pr35634.C -O3 -g execution test
FAIL: gcc.c-torture/execute/pr92140.c -O2 execution test
FAIL: gcc.c-torture/execute/pr92140.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.c-torture/execute/pr92140.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.c-torture/execute/pr92140.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
FAIL: gcc.c-torture/execute/pr92140.c -O3 -g execution test
FAIL: gcc.dg/torture/pr35634.c -O2 execution test
FAIL: gcc.dg/torture/pr35634.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.dg/torture/pr35634.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.dg/torture/pr35634.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
FAIL: gcc.dg/torture/pr35634.c -O3 -g execution test
```
## Resolved Failures Across All Affected Targets (13 targets / 13 total targets)
```
FAIL: gcc.target/riscv/stack_save_restore_1.c -O0 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O1 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O2 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O2 -flto -fno-use-linker-plugin -flto-partition=none check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -O3 -g check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_1.c -Os check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O0 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O1 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O2 check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O2 -flto -fno-use-linker-plugin -flto-partition=none check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -O3 -g check-function-bodies bar
FAIL: gcc.target/riscv/stack_save_restore_2.c -Os check-function-bodies bar
```
## Architecture Specific Resolved Failures
linux rv32gcv_zvbb_zvbc_zvkg_zvkn_zvknc_zvkned_zvkng_zvknha_zvknhb_zvks_zvksc_zvksed_zvksg_zvksh_zvkt ilp32d:
```
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 0" 2
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 2" 1
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 3" 1
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-7.c execution test
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-8.c execution test
```
newlib rv32gcv ilp32d:
```
FAIL: gcc.target/riscv/rvv/autovec/cond/cond_arith_run-9.c (test for excess errors)
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-1.c (test for excess errors)
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-2.c (test for excess errors)
```
newlib rv64imafdcv_zicond_zawrs_zbc_zvkng_zvksg_zvbb_zvbc_zicsr_zba_zbb_zbs_zicbom_zicbop_zicboz_zfhmin_zkt lp64d:
```
FAIL: g++.dg/tree-ssa/pr66726.C -std=gnu++14 execution test
FAIL: g++.dg/tree-ssa/pr66726.C -std=gnu++17 execution test
FAIL: g++.dg/tree-ssa/pr66726.C -std=gnu++20 execution test
FAIL: g++.dg/tree-ssa/pr66726.C -std=gnu++98 execution test
FAIL: gcc.c-torture/execute/20071216-1.c -O2 execution test
FAIL: gcc.c-torture/execute/20071216-1.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.c-torture/execute/20140212-2.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.c-torture/execute/20141125-1.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.c-torture/execute/960302-1.c -O1 execution test
FAIL: gcc.c-torture/execute/compndlit-1.c -O1 execution test
FAIL: gcc.c-torture/execute/compndlit-1.c -O2 execution test
FAIL: gcc.c-torture/execute/compndlit-1.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.c-torture/execute/compndlit-1.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.c-torture/execute/compndlit-1.c -O3 -g execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O1 execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O2 execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
FAIL: gcc.c-torture/execute/pr46909-1.c -O3 -g execution test
FAIL: gcc.c-torture/execute/pr46909-2.c -O1 execution test
FAIL: gcc.c-torture/execute/pr59014-2.c -O1 execution test
FAIL: gcc.c-torture/execute/pr59014-2.c -O2 execution test
FAIL: gcc.c-torture/execute/pr59014-2.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
FAIL: gcc.c-torture/execute/pr59014-2.c -O3 -g execution test
FAIL: gcc.c-torture/execute/pr68841.c -O1 execution test
FAIL: gcc.c-torture/execute/pr78617.c -O1 execution test
FAIL: gcc.dg/pr68841.c execution test
FAIL: gcc.dg/torture/pr45830.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects execution test
FAIL: gcc.dg/torture/pr68264.c -Os execution test
FAIL: gcc.dg/tree-ssa/ifc-pr47271.c execution test
FAIL: gcc.dg/tree-ssa/ssa-dom-cse-3.c execution test
```
newlib rv32gcv_zvbb_zvbc_zvkg_zvkn_zvknc_zvkned_zvkng_zvknha_zvknhb_zvks_zvksc_zvksed_zvksg_zvksh_zvkt ilp32d:
```
FAIL: gcc.target/riscv/rvv/autovec/cond/cond_arith_run-9.c (test for excess errors)
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-1.c (test for excess errors)
FAIL: gcc.target/riscv/rvv/autovec/gather-scatter/mask_scatter_store_run-2.c (test for excess errors)
```
Associated run is: https://github.com/patrick-rivos/riscv-gnu-toolchain/actions/runs/5981903667
| non_priority | 0 |
117,288 | 11,945,866,095 | IssuesEvent | 2020-04-03 06:56:06 | alcen/ped | https://api.github.com/repos/alcen/ped | opened | User Guide: Profit command not specific about exact time period used | severity.Low type.DocumentationBug | It seems like for Profit command, the range is inclusive of the start date (includes the start date in its calculations) but exclusive of the end date (does not include the end date in its calculations)
For example, transactions conducted on 3/4/2020 do not show up when this command is run:
`profit sd/2020-04-02 ed/2020-04-03`
but show up when this command is run:
`profit sd/2020-04-03 ed/2020-04-04`
This could be made clearer in the User Guide
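For reference, a half-open interval check like the one described is straightforward to express in code. The Java sketch below is purely illustrative; the class and method names are invented and do not come from the actual product:

```java
import java.time.LocalDate;

public class ProfitRange {
    // Half-open interval [startDate, endDate): the start date counts,
    // the end date does not -- matching the observed behaviour.
    static boolean isWithinRange(LocalDate txnDate, LocalDate start, LocalDate end) {
        return !txnDate.isBefore(start) && txnDate.isBefore(end);
    }

    public static void main(String[] args) {
        LocalDate txn = LocalDate.parse("2020-04-03");
        // Mirrors `profit sd/2020-04-02 ed/2020-04-03`: the transaction is excluded
        System.out.println(isWithinRange(txn,
                LocalDate.parse("2020-04-02"), LocalDate.parse("2020-04-03"))); // false
        // Mirrors `profit sd/2020-04-03 ed/2020-04-04`: the transaction is included
        System.out.println(isWithinRange(txn,
                LocalDate.parse("2020-04-03"), LocalDate.parse("2020-04-04"))); // true
    }
}
```

Documenting the range as [start, end) in the User Guide would make this behaviour unambiguous.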
| 1.0 | non_priority | 0 |
296,537 | 25,557,548,311 | IssuesEvent | 2022-11-30 08:13:06 | apache/pulsar | https://api.github.com/repos/apache/pulsar | opened | Flaky-test: ProxyAuthenticationTest.authenticatedSocketTest | component/test flaky-tests | ### Search before asking
- [X] I searched in the [issues](https://github.com/apache/pulsar/issues) and found nothing similar.
### Example failure
https://github.com/apache/pulsar/actions/runs/3581022993/jobs/6023785454#step:10:1392
### Exception stacktrace
<!-- optionally provide the full stacktrace -->
Full exception stacktrace:
<pre><code>
org.apache.pulsar.websocket.proxy.ProxyAuthenticationTest.authenticatedSocketTest(org.apache.pulsar.websocket.proxy.ProxyAuthenticationTest)
[INFO] Run 1: PASS
Error: Run 2: ProxyAuthenticationTest.authenticatedSocketTest:144->checkSocket:133 » ThreadTimeout
</code></pre>
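A common way to harden a timing-sensitive check like checkSocket is to poll against a deadline instead of relying on a single fixed wait. The Java sketch below is generic and is not taken from the Pulsar test suite; it is only an illustration of the pattern:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

final class AwaitUtil {
    // Polls the condition until it holds or the deadline passes,
    // instead of failing on one hard-coded timeout.
    static boolean await(BooleanSupplier condition, Duration timeout) throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(100); // back off briefly between checks
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }
}
```

A test would then call `AwaitUtil.await(() -> socketIsReady(), Duration.ofSeconds(30))` rather than asserting readiness at a fixed instant.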
### Are you willing to submit a PR?
- [ ] I'm willing to submit a PR! | 2.0 | non_priority | 0 |
282,882 | 21,315,979,916 | IssuesEvent | 2022-04-16 09:26:50 | ivanaitzliddat/pe | https://api.github.com/repos/ivanaitzliddat/pe | opened | Inconsistent formatting | severity.VeryLow type.DocumentationBug | The format of the commands is inconsistent across different features. It is unclear whether the parts between < and > are optional or whether there is no difference.


<!--session: 1650095959978-2d35ac18-cc5d-4738-853a-84b4d2bcc82b-->
<!--Version: Web v3.4.2--> | 1.0 | non_priority | 0 |
350,778 | 31,932,311,571 | IssuesEvent | 2023-09-19 08:16:13 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | reopened | Fix jax_lax_operators.test_jax_reciprocal | JAX Frontend Sub Task Failing Test | |Backend|CI status|
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6011246999"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6011246999"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6011246999"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6011246999"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6059097283"><img src=https://img.shields.io/badge/-failure-red></a>
| 1.0 | non_priority | 0 |
1,133 | 5,154,972,740 | IssuesEvent | 2017-01-15 06:35:19 | pi-engine/pi | https://api.github.com/repos/pi-engine/pi | closed | Add support LiteSpeed | Architecture | Hello
Is it possible to add support for LiteSpeed in the setup? I have about 8 websites running Pi on a LiteSpeed server and I haven't had any problems yet; all configs are like Apache's.
If needed, I can send you a test host.
Some websites running on LiteSpeed:
- http://www.payamakyab.com/
- http://www.fantasy.ir/
- http://www.faragostaresh.com/
- http://www.nmgroup.ir/
- http://www.4vade.com/
| 1.0 | non_priority | 0 |
46,324 | 24,477,019,303 | IssuesEvent | 2022-10-08 09:49:41 | galaxyproject/galaxy | https://api.github.com/repos/galaxyproject/galaxy | closed | Workflow Invocation list performance issues | area/workflows area/performance | The Workflow Invocation list loads quite slowly once there are a couple hundred invocations. Suggest revising it to use lazy loading. | True | non_priority | 0 |
21,159 | 16,600,173,796 | IssuesEvent | 2021-06-01 18:16:36 | MPMG-DCC-UFMG/C01 | https://api.github.com/repos/MPMG-DCC-UFMG/C01 | opened | Campos comuns preenchidos por padrão | usabilidade | Observing how the system has been used, some settings end up repeating, so certain fields could come pre-selected to speed up the process.
As an example, the fields for following links and downloading files could come already checked, and the extensions field could come pre-filled with the most common ones, such as pdf, doc and docx. And of course, the person should still have the option to uncheck and change them.

| True | non_priority | 0 |
3,336 | 4,343,245,005 | IssuesEvent | 2016-07-29 00:32:48 | NyxStudios/TShock | https://api.github.com/repos/NyxStudios/TShock | closed | Validation on UpdateItemDrop | Security Problem | Presumably this can be used to crash clients by sending invalid item IDs. | True | non_priority | 0 |
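Bounds-checking the incoming ID before the packet is applied is the usual fix for this class of crash. The Java sketch below is only a hypothetical illustration of the idea; TShock itself is C#, and neither the names nor the bound here come from its code:

```java
final class ItemDropValidator {
    private static final int MAX_ITEM_TYPE = 5000; // hypothetical upper bound for valid item IDs

    // Rejects packets whose item type falls outside the known-valid range,
    // so malformed input is dropped instead of being forwarded to clients.
    static boolean isValidItemType(int itemType) {
        return itemType >= 0 && itemType <= MAX_ITEM_TYPE;
    }
}
```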
144,947 | 19,318,937,206 | IssuesEvent | 2021-12-14 01:41:25 | vlaship/build-docker-image | https://api.github.com/repos/vlaship/build-docker-image | opened | CVE-2021-33037 (Medium) detected in tomcat-embed-core-9.0.26.jar | security vulnerability | ## CVE-2021-33037 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.26.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: build-docker-image/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.26/6312ba542bc58fa9ee789a43516ce4d862548a6b/tomcat-embed-core-9.0.26.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.26/6312ba542bc58fa9ee789a43516ce4d862548a6b/tomcat-embed-core-9.0.26.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.9.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.9.RELEASE.jar
- tomcat-embed-websocket-9.0.26.jar
- :x: **tomcat-embed-core-9.0.26.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Tomcat 10.0.0-M1 to 10.0.6, 9.0.0.M1 to 9.0.46 and 8.5.0 to 8.5.66 did not correctly parse the HTTP transfer-encoding request header in some circumstances, leading to the possibility of request smuggling when used with a reverse proxy. Specifically: - Tomcat incorrectly ignored the transfer encoding header if the client declared it would only accept an HTTP/1.0 response; - Tomcat honoured the identity encoding; and - Tomcat did not ensure that, if present, the chunked encoding was the final encoding.
<p>Publish Date: 2021-07-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33037>CVE-2021-33037</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rd84fae1f474597bdf358f5bdc0a5c453c507bd527b83e8be6b5ea3f4%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rd84fae1f474597bdf358f5bdc0a5c453c507bd527b83e8be6b5ea3f4%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-07-12</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.68, 9.0.48, 10.0.7, org.apache.tomcat.embed:tomcat-embed-core:8.5.68, 9.0.48, 10.0.7</p>
</p>
</details>
<p></p>
***
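Beyond upgrading, it can help to verify at runtime which embedded Tomcat actually ended up on the classpath. A minimal Java sketch using tomcat-embed-core's own ServerInfo utility follows; the threshold to compare against is the fix resolution listed above (9.0.48 for the 9.x line):

```java
import org.apache.catalina.util.ServerInfo;

public class TomcatVersionCheck {
    public static void main(String[] args) {
        // Prints e.g. "Apache Tomcat/9.0.48"; any 9.x number below 9.0.48
        // still carries the CVE-2021-33037 parsing behaviour.
        System.out.println(ServerInfo.getServerInfo());
        System.out.println(ServerInfo.getServerNumber());
    }
}
```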
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
19,296 | 3,438,003,814 | IssuesEvent | 2015-12-13 17:32:10 | eloipuertas/ES2015A | https://api.github.com/repos/eloipuertas/ES2015A | closed | Defining buildings in campaigns | design Group D | ### DESCRIPTION:
We want to put cubes with tags in the 4 campaign scenes to place buildings.
### OUTCOME EXPECTED / ACCEPTANCE CRITERIA:
Campaigns with scattered cubes.
### Estimated time effort: 3h | 1.0 | non_priority | 0 |
6,001 | 8,674,599,627 | IssuesEvent | 2018-11-30 08:16:12 | FundacionParaguaya/MentorApp | https://api.github.com/repos/FundacionParaguaya/MentorApp | reopened | Validation on day and year of birth date | UX Requirement bug | - [ ] Validation of Day and Year in the dropdown its delayed - An error message only appears when the user clicks on the next input field - When the user clicks on the next field it hides the invalid message - thus creating a usability issue
- [ ] Validation on year is incorrect - This allows any number even those such as 4444 | 1.0 | Validation on day and year of birth date - - [ ] Validation of Day and Year in the dropdown its delayed - An error message only appears when the user clicks on the next input field - When the user clicks on the next field it hides the invalid message - thus creating a usability issue
- [ ] Validation on year is incorrect - This allows any number even those such as 4444 | non_priority | validation on day and year of birth date validation of day and year in the dropdown its delayed an error message only appears when the user clicks on the next input field when the user clicks on the next field it hides the invalid message thus creating a usability issue validation on year is incorrect this allows any number even those such as | 0 |
113,925 | 9,668,686,942 | IssuesEvent | 2019-05-21 15:37:34 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: copy/bank/rows=10000000,nodes=9,txn=false failed | C-test-failure O-roachtest O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/c8bda1de440cfe90cf23a433119d77795cfa0047
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=copy/bank/rows=10000000,nodes=9,txn=false PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1292152&tab=buildLog
```
The test failed on branch=release-19.1, cloud=gce:
copy.go:119,cluster.go:1812,errgroup.go:57: failed to copy rows: pq: result is ambiguous (removing replica)
cluster.go:1833,copy.go:132,copy.go:144,test.go:1251: Goexit() was called
``` | 2.0 | roachtest: copy/bank/rows=10000000,nodes=9,txn=false failed - SHA: https://github.com/cockroachdb/cockroach/commits/c8bda1de440cfe90cf23a433119d77795cfa0047
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=copy/bank/rows=10000000,nodes=9,txn=false PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1292152&tab=buildLog
```
The test failed on branch=release-19.1, cloud=gce:
copy.go:119,cluster.go:1812,errgroup.go:57: failed to copy rows: pq: result is ambiguous (removing replica)
cluster.go:1833,copy.go:132,copy.go:144,test.go:1251: Goexit() was called
``` | non_priority | roachtest copy bank rows nodes txn false failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests copy bank rows nodes txn false pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on branch release cloud gce copy go cluster go errgroup go failed to copy rows pq result is ambiguous removing replica cluster go copy go copy go test go goexit was called | 0 |
62,393 | 25,984,510,543 | IssuesEvent | 2022-12-19 22:14:07 | hashicorp/terraform-provider-azurerm | https://api.github.com/repos/hashicorp/terraform-provider-azurerm | closed | Support for deploymentScripts resource | new-resource service/resources | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
With provisioners being considered a last resort and TF Enterprise executions happening in ephemeral containers, it would be nice to have a way to execute commands on resources when there is no supported resource in the provider or it is a command that needs to be executed on a resource, like a VM.
See MS documentation - https://docs.microsoft.com/en-us/azure/templates/microsoft.resources/deploymentscripts?tabs=json
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* azurerm_deployment_script
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
resource "azurerm_windows_virtual_machine" "example" {
# property definitions
}
resource "azurerm_deployment_script" "jit" {
depends_on = [azurerm_windows_virtual_machine.example]
name = "myScript"
location = "centralus"
kind = "AzurePowerShell"
script = <<SCRIPT
$ddressPrefixes = New-Object System.Collections.Generic.List[string]
$AddressPrefixes.Add('x.x.x.x')
$AddressPrefixes.Add('x.x.x.x')
$JitPolicy = (@{
id = "/subscriptions/$(az_sub_id)/resourceGroups/$(resource_group_name)/providers/Microsoft.Compute/virtualMachines/${{ vm.vm_name }}";
ports = (@{
number = 3389;
protocol = "*";
allowedSourceAddressPrefixes = $AddressPrefixes;
maxRequestAccessDuration = "PT3H"
})
})
$JitPolicyArr = @($JitPolicy)
Set-AzJitNetworkAccessPolicy -Kind "Basic" -Location $(location) -Name "default" -ResourceGroupName $(resource_group_name) -VirtualMachine $JitPolicyArr
SCRIPT
identity {
type = "UserAssigned"
id = azurerm_managed_identity.example.id
}
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://azure.microsoft.com/en-us/roadmap/virtual-network-service-endpoint-for-azure-cosmos-db/
--->
* #0000
| 1.0 | Support for deploymentScripts resource - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
With provisioners being considered a last resort and TF Enterprise executions happening in ephemeral containers, it would be nice to have a way to execute commands on resources when there is no supported resource in the provider or it is a command that needs to be executed on a resource, like a VM.
See MS documentation - https://docs.microsoft.com/en-us/azure/templates/microsoft.resources/deploymentscripts?tabs=json
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* azurerm_deployment_script
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
resource "azurerm_windows_virtual_machine" "example" {
# property definitions
}
resource "azurerm_deployment_script" "jit" {
depends_on = [azurerm_windows_virtual_machine.example]
name = "myScript"
location = "centralus"
kind = "AzurePowerShell"
script = <<SCRIPT
$AddressPrefixes = New-Object System.Collections.Generic.List[string]
$AddressPrefixes.Add('x.x.x.x')
$AddressPrefixes.Add('x.x.x.x')
$JitPolicy = (@{
id = "/subscriptions/$(az_sub_id)/resourceGroups/$(resource_group_name)/providers/Microsoft.Compute/virtualMachines/${{ vm.vm_name }}";
ports = (@{
number = 3389;
protocol = "*";
allowedSourceAddressPrefixes = $AddressPrefixes;
maxRequestAccessDuration = "PT3H"
})
})
$JitPolicyArr = @($JitPolicy)
Set-AzJitNetworkAccessPolicy -Kind "Basic" -Location $(location) -Name "default" -ResourceGroupName $(resource_group_name) -VirtualMachine $JitPolicyArr
SCRIPT
identity {
type = "UserAssigned"
id = azurerm_managed_identity.example.id
}
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://azure.microsoft.com/en-us/roadmap/virtual-network-service-endpoint-for-azure-cosmos-db/
--->
* #0000
| non_priority | support for deploymentscripts resource community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description with provisioners being considered a last resort and tf enterprise executions happening in ephemeral containers it would be nice to have a way to execute commands on resources when there is no supported resource in the provider or it is a command that needs to be executed on a resource like a vm see ms documentation new or affected resource s azurerm deployment script potential terraform configuration hcl copy paste your terraform configurations here for large terraform configs please use a service like dropbox and share a link to the zip file for security you can also encrypt the files using our gpg public key resource azurerm windows virtual machine example property definitions resource azurerm deployment script jit depends on name myscript location centralus kind azurepowershell script script ddressprefixes new object system collections generic list addressprefixes add x x x x addressprefixes add x x x x jitpolicy id subscriptions az sub id resourcegroups resource group name providers microsoft compute virtualmachines vm vm name ports number protocol allowedsourceaddressprefixes addressprefixes maxrequestaccessduration jitpolicyarr jitpolicy set azjitnetworkaccesspolicy kind basic location location name default resourcegroupname resource group name virtualmachine jitpolicyarr script identity type userassigned id azurerm managed identity example id references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example | 0 |
388,496 | 26,768,958,296 | IssuesEvent | 2023-01-31 12:47:41 | bakdata/kpops | https://api.github.com/repos/bakdata/kpops | closed | Tidy structure of documentation | area/documentation | ## User guide
- ~~User Guide~~ → What is KPOps?
- [Getting Started](https://bakdata.github.io/kpops/0.5/user/getting-started/)
- [Setup KPOps](https://bakdata.github.io/kpops/0.5/user/getting-started/setup/)
- Deploy Word Count pipeline
- Producer
- Streams App
- Sink → Redis
- _Destroy_
- _Clean_
- _Reset_
- [~~Teardown resources~~](https://bakdata.github.io/kpops/0.5/user/getting-started/teardown/)
- [Examples](https://bakdata.github.io/kpops/0.5/user/examples/)
- [ATM fraud detection pipeline](https://bakdata.github.io/kpops/0.5/user/examples/atm-fraud-pipeline/)
- [References](https://bakdata.github.io/kpops/0.5/user/references/)
- [CLI usage](https://bakdata.github.io/kpops/0.5/user/references/cli-commands/)
- [Variables](https://bakdata.github.io/kpops/0.5/user/references/variables/)
---
## _Developer guide_
_Everything written in italic will be done later_ | 1.0 | Tidy structure of documentation - ## User guide
- ~~User Guide~~ → What is KPOps?
- [Getting Started](https://bakdata.github.io/kpops/0.5/user/getting-started/)
- [Setup KPOps](https://bakdata.github.io/kpops/0.5/user/getting-started/setup/)
- Deploy Word Count pipeline
- Producer
- Streams App
- Sink → Redis
- _Destroy_
- _Clean_
- _Reset_
- [~~Teardown resources~~](https://bakdata.github.io/kpops/0.5/user/getting-started/teardown/)
- [Examples](https://bakdata.github.io/kpops/0.5/user/examples/)
- [ATM fraud detection pipeline](https://bakdata.github.io/kpops/0.5/user/examples/atm-fraud-pipeline/)
- [References](https://bakdata.github.io/kpops/0.5/user/references/)
- [CLI usage](https://bakdata.github.io/kpops/0.5/user/references/cli-commands/)
- [Variables](https://bakdata.github.io/kpops/0.5/user/references/variables/)
---
## _Developer guide_
_Everything written in italic will be done later_ | non_priority | tidy structure of documentation user guide user guide → what is kpops deploy word count pipeline producer streams app sink → redis destroy clean reset developer guide everything written in italic will be done later | 0 |
6,914 | 9,212,024,709 | IssuesEvent | 2019-03-09 20:23:24 | wpCloud/wp-stateless | https://api.github.com/repos/wpCloud/wp-stateless | opened | GCS URL isn't being set until image optimized | type/bug type/compatibility workflow/ready | https://wordpress.org/support/topic/new-media-items-not-updating-with-correct-url/
Affected compatibility: Imagify and Smush.
The sync is being done after Imagify complete the optimization. So we don't get GCS URL until the optimization is completed.
We previously solved it by syncing the full size image first then sync all sizes again when optimization completes.
But the issue in 2.2.4 arrived when we changed (in issue #359) how Imagify prevent the sync on image upload.
Now we have to make sure that Optimization plugins don't prevent when we try to upload full size image first.
We had to do this in this way for performance measure. Unless we would have to sync twice once after upload and again when image optimized. | True | GCS URL isn't being set until image optimized - https://wordpress.org/support/topic/new-media-items-not-updating-with-correct-url/
Affected compatibility: Imagify and Smush.
The sync is being done after Imagify complete the optimization. So we don't get GCS URL until the optimization is completed.
We previously solved it by syncing the full size image first then sync all sizes again when optimization completes.
But the issue in 2.2.4 arrived when we changed (in issue #359) how Imagify prevent the sync on image upload.
Now we have to make sure that Optimization plugins don't prevent when we try to upload full size image first.
We had to do this in this way for performance measure. Unless we would have to sync twice once after upload and again when image optimized. | non_priority | gcs url isn t being set until image optimized affected compatibility imagify and smush the sync is being done after imagify complete the optimization so we don t get gcs url until the optimization is completed we previously solved it by syncing the full size image first then sync all sizes again when optimization completes but the issue in arrived when we changed in issue how imagify prevent the sync on image upload now we have to make sure that optimization plugins don t prevent when we try to upload full size image first we had to do this in this way for performance measure unless we would have to sync twice once after upload and again when image optimized | 0 |
213,737 | 16,534,405,640 | IssuesEvent | 2021-05-27 10:03:35 | systemd/systemd | https://api.github.com/repos/systemd/systemd | opened | test-suite failures inside Docker / Podman | tests | **systemd version the issue has been seen with**
248.3
**Used distribution**
Debian sid
Triggered by #19733 I ran the systemd test suite inside Docker and also Podman.
If you want to reproduce the issue, install podman, then run the following command:
```
podman run --name debian-sid -e LANG=C.UTF-8 -it debian:sid /bin/bash -x -c 'echo "deb-src http://ftp.debian.org/debian/ experimental main" >> /etc/apt/sources.list; apt update && apt upgrade -y && apt build-dep -y systemd; apt install -y systemd; systemd-detect-virt; systemd-detect-virt --container; apt-get source -b systemd'
```
This results in
```
Summary of Failures:
324/611 udev-test SKIP 0.13s
352/611 test-bus-marshal SKIP 0.04s
356/611 test-bus-chat SKIP 0.01s
357/611 test-bus-cleanup SKIP 0.01s
358/611 test-bus-track SKIP 0.01s
362/611 test-bus-gvariant SKIP 0.01s
364/611 test-bus-match SKIP 0.01s
372/611 test-sd-device-monitor SKIP 0.01s
383/611 test-dhcp-server SKIP 0.12s
399/611 test-oomd-util SKIP 0.01s
417/611 test-boot-timestamps SKIP 0.01s
425/611 test-capability FAIL 0.62s (killed by signal 6 SIGABRT)
425/611 test-capability FAIL 0.62s (killed by signal 6 SIGABRT)
442/611 test-mount-util FAIL 0.47s (killed by signal 6 SIGABRT)
442/611 test-mount-util FAIL 0.47s (killed by signal 6 SIGABRT)
480/611 test-barrier SKIP 0.01s
482/611 test-namespace SKIP 0.01s
486/611 test-seccomp FAIL 0.37s (killed by signal 6 SIGABRT)
486/611 test-seccomp FAIL 0.37s (killed by signal 6 SIGABRT)
489/611 test-loop-block SKIP 0.01s
492/611 test-bpf-devices SKIP 0.01s
493/611 test-bpf-firewall SKIP 0.01s
512/611 test-firewall-util SKIP 9.06s
535/611 test-execute FAIL 0.47s (killed by signal 6 SIGABRT)
535/611 test-execute FAIL 0.47s (killed by signal 6 SIGABRT)
552/611 test-sd-hwdb SKIP 0.01s
Ok: 589
Expected Fail: 0
Fail: 4
Unexpected Pass: 0
Skipped: 18
Timeout: 0
```
A full build log is attached. Notice, how the failing tests are duplicated in the summary. Probably worth another bug report?
As you can see from the log, systemd-detect-virt does properly detect the Podman environment.
[build.txt](https://github.com/systemd/systemd/files/6553050/build.txt)
| 1.0 | test-suite failures inside Docker / Podman - **systemd version the issue has been seen with**
248.3
**Used distribution**
Debian sid
Triggered by #19733 I ran the systemd test suite inside Docker and also Podman.
If you want to reproduce the issue, install podman, then run the following command:
```
podman run --name debian-sid -e LANG=C.UTF-8 -it debian:sid /bin/bash -x -c 'echo "deb-src http://ftp.debian.org/debian/ experimental main" >> /etc/apt/sources.list; apt update && apt upgrade -y && apt build-dep -y systemd; apt install -y systemd; systemd-detect-virt; systemd-detect-virt --container; apt-get source -b systemd'
```
This results in
```
Summary of Failures:
324/611 udev-test SKIP 0.13s
352/611 test-bus-marshal SKIP 0.04s
356/611 test-bus-chat SKIP 0.01s
357/611 test-bus-cleanup SKIP 0.01s
358/611 test-bus-track SKIP 0.01s
362/611 test-bus-gvariant SKIP 0.01s
364/611 test-bus-match SKIP 0.01s
372/611 test-sd-device-monitor SKIP 0.01s
383/611 test-dhcp-server SKIP 0.12s
399/611 test-oomd-util SKIP 0.01s
417/611 test-boot-timestamps SKIP 0.01s
425/611 test-capability FAIL 0.62s (killed by signal 6 SIGABRT)
425/611 test-capability FAIL 0.62s (killed by signal 6 SIGABRT)
442/611 test-mount-util FAIL 0.47s (killed by signal 6 SIGABRT)
442/611 test-mount-util FAIL 0.47s (killed by signal 6 SIGABRT)
480/611 test-barrier SKIP 0.01s
482/611 test-namespace SKIP 0.01s
486/611 test-seccomp FAIL 0.37s (killed by signal 6 SIGABRT)
486/611 test-seccomp FAIL 0.37s (killed by signal 6 SIGABRT)
489/611 test-loop-block SKIP 0.01s
492/611 test-bpf-devices SKIP 0.01s
493/611 test-bpf-firewall SKIP 0.01s
512/611 test-firewall-util SKIP 9.06s
535/611 test-execute FAIL 0.47s (killed by signal 6 SIGABRT)
535/611 test-execute FAIL 0.47s (killed by signal 6 SIGABRT)
552/611 test-sd-hwdb SKIP 0.01s
Ok: 589
Expected Fail: 0
Fail: 4
Unexpected Pass: 0
Skipped: 18
Timeout: 0
```
A full build log is attached. Notice, how the failing tests are duplicated in the summary. Probably worth another bug report?
As you can see from the log, systemd-detect-virt does properly detect the Podman environment.
[build.txt](https://github.com/systemd/systemd/files/6553050/build.txt)
| non_priority | test suite failures inside docker podman systemd version the issue has been seen with used distribution debian sid triggered by i ran the systemd test suite inside docker and also podman if you want to reproduce the issue install podman then run the following command podman run name debian sid e lang c utf it debian sid bin bash x c echo deb src experimental main etc apt sources list apt update apt upgrade y apt build dep y systemd apt install y systemd systemd detect virt systemd detect virt container apt get source b systemd this results in summary of failures udev test skip test bus marshal skip test bus chat skip test bus cleanup skip test bus track skip test bus gvariant skip test bus match skip test sd device monitor skip test dhcp server skip test oomd util skip test boot timestamps skip test capability fail killed by signal sigabrt test capability fail killed by signal sigabrt test mount util fail killed by signal sigabrt test mount util fail killed by signal sigabrt test barrier skip test namespace skip test seccomp fail killed by signal sigabrt test seccomp fail killed by signal sigabrt test loop block skip test bpf devices skip test bpf firewall skip test firewall util skip test execute fail killed by signal sigabrt test execute fail killed by signal sigabrt test sd hwdb skip ok expected fail fail unexpected pass skipped timeout a full build log is attached notice how the failing tests are duplicated in the summary probably worth another bug report as you can see from the log systemd detect virt does properly detect the podman environment | 0 |
42,963 | 5,559,653,714 | IssuesEvent | 2017-03-24 17:27:27 | bounswe/bounswe2017group3 | https://api.github.com/repos/bounswe/bounswe2017group3 | opened | Enhancing the Project Plan | design planning | There are some misunderstandings in Project Plan.
What is the difference between Class Diagram in Behavioral Modeling and System Classes in Structural Modeling?
What is the difference between Activity Diagrams and Action Diagrams?
Funding and analysis parts are unclear and have no assignees (resources). | 1.0 | Enhancing the Project Plan - There are some misunderstandings in Project Plan.
What is the difference between Class Diagram in Behavioral Modeling and System Classes in Structural Modeling?
What is the difference between Activity Diagrams and Action Diagrams?
Funding and analysis parts are unclear and have no assignees (resources). | non_priority | enhancing the project plan there are some misunderstandings in project plan what is the difference between class diagram in behavioral modeling and system classes in structural modeling what is the difference between activity diagrams and action diagrams funding and analysis parts are unclear and have no assignees resources | 0 |
6,769 | 4,540,538,862 | IssuesEvent | 2016-09-09 14:56:41 | Chicago/opengrid | https://api.github.com/repos/Chicago/opengrid | reopened | More accessible language | CUTGroup dev branch usability | From the CUTGroup test, we saw that there are a lot of opportunities to make the language more accessible. On the homepage, we heard that a lot of the content seemed targeted to developers or the tech community. While on the advanced search, words like "geo-spatial filters" could be switched to "location" or "queries" could be switched to "searches." It would be worth going through the site again and noting words that could be switched to more accessible, plain-language options.
I would also think about the map markers, too, and making those easier to understand from a resident perspective. | True | More accessible language - From the CUTGroup test, we saw that there are a lot of opportunities to make the language more accessible. On the homepage, we heard that a lot of the content seemed targeted to developers or the tech community. While on the advanced search, words like "geo-spatial filters" could be switched to "location" or "queries" could be switched to "searches." It would be worth going through the site again and noting words that could be switched to more accessible, plain-language options.
I would also think about the map markers, too, and making those easier to understand from a resident perspective. | non_priority | more accessible language from the cutgroup test we saw that there are a lot of opportunities to make the language more accessible on the homepage we heard that a lot of the content seemed targeted to developers or the tech community while on the advanced search words like geo spatial filters could be switched to location or queries could be switched to searches it would be worth going through the site again and noting words that could be switched to more accessible plain language options i would also think about the map markers too and making those easier to understand from a resident perspective | 0 |
63,038 | 26,234,970,356 | IssuesEvent | 2023-01-05 06:03:13 | Azure/azure-powershell | https://api.github.com/repos/Azure/azure-powershell | closed | Set-AzPolicyDefinition - white spaces are removed for all provided string values | ARM Service Attention bug customer-reported issue-addressed | ### Description
New Bug within Az-Resource 6.5 version.
When calling Set-AzPolicyDefinition, all (!) spaces are removed for all properties. It's very easy to reproduce. The spaces can be provided for every parameter.
For example:
Set-AzPolicyDefinition -Name $name `
>> -DisplayName $properties.DisplayName `
>> -Description **"This policy creates a Resource Group to subscription for RSVs."** `
>> -ManagementGroupName $mgmtGroupName `
>> -Mode $properties.Mode `
>> -Policy $policy `
>> -Parameter $parameters `
>> -Metadata $metadata `
>> -Debug
Look at the spaces in -Description. All other parameters are built beforhand, and are irrelevant for this showcase.
When looking at the PUT REQUEST in the Debug output, the following description is shown:
DEBUG: ============================ HTTP REQUEST ============================
HTTP Method:
PUT
Absolute Uri:
https://management.azure.com/providers/Microsoft.Management/managementGroups/RBHQ/providers/Microsoft.Authorization/policydefinitions/dine-vmaas-backupvault-rg?api-version=2021-06-01
Headers:
User-Agent : Az.Resources/6.5.0,PSVersion/v7.3.0,AzurePowershell/v9.2.0
ParameterSetName : ManagementGroupNameParameterSet
CommandName : Set-AzPolicyDefinition
Body:
{
"name": "dine-vmaas-backupvault-rg",
"properties": {
"description": "**ThispolicycreatesaResourceGrouptosubscriptionforRSVs.**",
So all spaces are removed. This happens for every provided parameter/property/JSON-fragment to Set-AzPolicyDefinition. Tested with allmost every combination.
Under Az CmdLet v < 6.5 this was definitely not an issue.
This issue may be related to #20386:
https://github.com/Azure/azure-powershell/issues/20386
### Issue script & Debug output
```PowerShell
# pls. provde a valid policy object and set $name, $mgmtGroupName, $policy, $parameters, $metadata accordingly
Set-AzPolicyDefinition -Name $name `
-DisplayName $displayName `
-Description **"This policy creates a Resource Group to subscription for RSVs."** `
-ManagementGroupName $mgmtGroupName `
-Mode $mode `
-Policy $policy `
-Parameter $parameters `
-Metadata $metadata `
-Debug
DEBUG OUTPUT (I had to remove all sensitive information):
DEBUG: 16:24:19 - SetAzurePolicyDefinitionCmdlet begin processing with ParameterSet 'ManagementGroupNameParameterSet'.
DEBUG: 16:24:19 - using account id '<accountname>'...
DEBUG: 16:24:19 - [ConfigManager] Got [False] from [DisplayBreakingChangeWarning], Module = [], Cmdlet = [].
DEBUG: [Common.Authentication]: Authenticating using Account: '<accountname>', environment: 'AzureCloud', tenant: '<id>'
DEBUG: 16:24:19 - [SilentAuthenticator] Calling SharedTokenCacheCredential.GetTokenAsync - TenantId:'<id>', Scopes:'https://management.core.windows.net//.default', AuthorityHost:'https://login.microsoftonline.com/', UserId:'<account>'
DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId:
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - a9069589-bba4-4a58-890b-bc4c16ab5c44] IsLegacyAdalCacheEnabled: yes
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - a9069589-bba4-4a58-890b-bc4c16ab5c44] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - a9069589-bba4-4a58-890b-bc4c16ab5c44] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - a9069589-bba4-4a58-890b-bc4c16ab5c44] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - a9069589-bba4-4a58-890b-bc4c16ab5c44] IsLegacyAdalCacheEnabled: yes
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - a9069589-bba4-4a58-890b-bc4c16ab5c44] IsLegacyAdalCacheEnabled: yes
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z] Found 2 cache accounts and 0 broker accounts
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z] Returning 2 accounts
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] MSAL MSAL.NetCore with assembly version '4.46.2.0'. CorrelationId(42cd9b14-3655-4117-8b8f-b2e723e910fd)
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] === AcquireTokenSilent Parameters ===
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] LoginHint provided: False
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Account provided: True
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] ForceRefresh: False
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd]
=== Request Data ===
Authority Provided? - True
Scopes - https://management.core.windows.net//.default
Extra Query Params Keys (space separated) -
ApiId - AcquireTokenSilent
IsConfidentialClient - False
SendX5C - False
LoginHint ? False
IsBrokerConfigured - False
HomeAccountId - False
CorrelationId - 42cd9b14-3655-4117-8b8f-b2e723e910fd
UserAssertion set: False
LongRunningOboCacheKey set: False
Region configured:
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] === Token Acquisition (SilentRequest) started:
Scopes: https://management.core.windows.net//.default
Authority Host: login.microsoftonline.com
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Access token has expired or about to expire. [Current time (12/14/2022 15:24:19) - Expiration Time (12/14/2022 14:52:29 +00:00) - Extended Expiration Time (12/14/2022 14:52:29 +00:00)]
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] [FindRefreshTokenAsync] Refresh token found in the cache? - True
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:19Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Fetching instance discovery from the network from host login.microsoftonline.com.
DEBUG: Request [8e931ce2-b92b-4ac9-9375-2e59b7a48abc] GET https://login.microsoftonline.com/common/discovery/instance?api-version=1.1&authorization_endpoint=REDACTED
x-client-SKU:REDACTED
x-client-Ver:REDACTED
x-client-CPU:REDACTED
x-client-OS:REDACTED
client-request-id:REDACTED
return-client-request-id:REDACTED
x-app-name:REDACTED
x-app-ver:REDACTED
x-ms-client-request-id:8e931ce2-b92b-4ac9-9375-2e59b7a48abc
x-ms-return-client-request-id:true
User-Agent:azsdk-net-Identity/1.6.1,(.NET 7.0.0; Microsoft Windows 10.0.22000)
client assembly: Azure.Identity
DEBUG: Response [8e931ce2-b92b-4ac9-9375-2e59b7a48abc] 200 OK (00.3s)
Cache-Control:max-age=86400, private
Strict-Transport-Security:REDACTED
X-Content-Type-Options:REDACTED
Access-Control-Allow-Origin:REDACTED
Access-Control-Allow-Methods:REDACTED
P3P:REDACTED
client-request-id:REDACTED
x-ms-request-id:327265c4-98ff-4492-b187-58ccb1848900
x-ms-ests-server:REDACTED
X-XSS-Protection:REDACTED
Set-Cookie:REDACTED
Date:Wed, 14 Dec 2022 15:24:19 GMT
Content-Type:application/json; charset=utf-8
Content-Length:980
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Authority validation enabled? True.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Authority validation - is known env? True.
DEBUG: Request [ce62bc5b-e44b-447d-b03d-a2e2b4f20a17] POST https://login.microsoftonline.com/<id>/oauth2/v2.0/token
x-client-SKU:REDACTED
x-client-Ver:REDACTED
x-client-CPU:REDACTED
x-client-OS:REDACTED
x-anchormailbox:REDACTED
x-client-current-telemetry:REDACTED
x-client-last-telemetry:REDACTED
x-ms-lib-capability:REDACTED
client-request-id:REDACTED
return-client-request-id:REDACTED
x-app-name:REDACTED
x-app-ver:REDACTED
x-ms-client-request-id:ce62bc5b-e44b-447d-b03d-a2e2b4f20a17
x-ms-return-client-request-id:true
User-Agent:azsdk-net-Identity/1.6.1,(.NET 7.0.0; Microsoft Windows 10.0.22000)
Content-Type:application/x-www-form-urlencoded
client assembly: Azure.Identity
DEBUG: Response [ce62bc5b-e44b-447d-b03d-a2e2b4f20a17] 200 OK (00.2s)
Cache-Control:no-store, no-cache
Pragma:no-cache
Strict-Transport-Security:REDACTED
X-Content-Type-Options:REDACTED
P3P:REDACTED
client-request-id:REDACTED
x-ms-request-id:3e724063-a045-40ee-83d4-fc481188d200
x-ms-ests-server:REDACTED
x-ms-clitelem:REDACTED
X-XSS-Protection:REDACTED
Set-Cookie:REDACTED
Date:Wed, 14 Dec 2022 15:24:19 GMT
Content-Type:application/json; charset=utf-8
Expires:-1
Content-Length:6278
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Checking client info returned from the server..
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Saving token response to cache..
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] [SaveTokenResponseAsync] Saving AT in cache and removing overlapping ATs...
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Looking for scopes for the authority in the cache which intersect with https://management.core.windows.net//.default
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Intersecting scope entries count - 1
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Matching entries after filtering by user - 1
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] [SaveTokenResponseAsync] Saving Id Token and Account in cache ...
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] [SaveTokenResponseAsync] Saving RT in cache...
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] IsLegacyAdalCacheEnabled: yes
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Not writing FRT in ADAL legacy cache.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd]
=== Token Acquisition finished successfully:
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] AT expiration time: 14.12.2022 16:43:05 +00:00, scopes: https://management.core.windows.net//user_impersonation https://management.core.windows.net//.default. source: IdentityProvider
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 42cd9b14-3655-4117-8b8f-b2e723e910fd] Fetched access token from host login.microsoftonline.com.
DEBUG: SharedTokenCacheCredential.GetToken succeeded. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: ExpiresOn: 2022-12-14T16:43:05.3631445+00:00
DEBUG: [Common.Authentication]: Received token with LoginType 'User', Tenant: '<id>', UserId: '<account>'
DEBUG: [Common.Authentication]: Authenticating using Account: '<account>', environment: 'AzureCloud', tenant: '<id>'
DEBUG: 16:24:20 - [SilentAuthenticator] Calling SharedTokenCacheCredential.GetTokenAsync - TenantId:'<id>', Scopes:'https://management.core.windows.net//.default', AuthorityHost:'https://login.microsoftonline.com/', UserId:'<account>'
DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId:
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - d7510a03-9e4f-492a-b2ab-64fc0961db59] IsLegacyAdalCacheEnabled: yes
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - d7510a03-9e4f-492a-b2ab-64fc0961db59] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - d7510a03-9e4f-492a-b2ab-64fc0961db59] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - d7510a03-9e4f-492a-b2ab-64fc0961db59] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - d7510a03-9e4f-492a-b2ab-64fc0961db59] IsLegacyAdalCacheEnabled: yes
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - d7510a03-9e4f-492a-b2ab-64fc0961db59] IsLegacyAdalCacheEnabled: yes
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z] Found 2 cache accounts and 0 broker accounts
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z] Returning 2 accounts
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] MSAL MSAL.NetCore with assembly version '4.46.2.0'. CorrelationId(5dfd53c3-b511-4672-a862-504a494c30db)
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] === AcquireTokenSilent Parameters ===
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] LoginHint provided: False
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] Account provided: True
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] ForceRefresh: False
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db]
=== Request Data ===
Authority Provided? - True
Scopes - https://management.core.windows.net//.default
Extra Query Params Keys (space separated) -
ApiId - AcquireTokenSilent
IsConfidentialClient - False
SendX5C - False
LoginHint ? False
IsBrokerConfigured - False
HomeAccountId - False
CorrelationId - 5dfd53c3-b511-4672-a862-504a494c30db
UserAssertion set: False
LongRunningOboCacheKey set: False
Region configured:
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] === Token Acquisition (SilentRequest) started:
Scopes: https://management.core.windows.net//.default
Authority Host: login.microsoftonline.com
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] Access token is not expired. Returning the found cache entry. [Current time (12/14/2022 15:24:20) - Expiration Time (12/14/2022 16:43:05 +00:00) - Extended Expiration Time (12/14/2022 16:43:05 +00:00)]
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] Returning access token found in cache. RefreshOn exists ? False
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db]
=== Token Acquisition finished successfully:
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:20Z - 5dfd53c3-b511-4672-a862-504a494c30db] AT expiration time: 14.12.2022 16:43:05 +00:00, scopes: https://management.core.windows.net//user_impersonation https://management.core.windows.net//.default. source: Cache
DEBUG: SharedTokenCacheCredential.GetToken succeeded. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: ExpiresOn: 2022-12-14T16:43:05.0000000+00:00
DEBUG: [Common.Authentication]: Received token with LoginType 'User', Tenant: '<id>', UserId: '<account>'
DEBUG: ============================ HTTP REQUEST ============================
HTTP Method:
GET
Absolute Uri:
https://management.azure.com/providers/Microsoft.Management/managementGroups/<groupname>/providers/Microsoft.Authorization/policydefinitions/dine-vmaas-backupvault-rg?api-version=2021-06-01
Headers:
User-Agent : Az.Resources/6.5.0,PSVersion/v7.3.0,AzurePowershell/v9.2.0
ParameterSetName : ManagementGroupNameParameterSet
CommandName : Set-AzPolicyDefinition
Body:
DEBUG: ============================ HTTP RESPONSE ============================
Status Code:
OK
Headers:
Cache-Control : no-cache
Pragma : no-cache
Strict-Transport-Security : max-age=31536000; includeSubDomains
Server : Kestrel
x-ms-ratelimit-remaining-tenant-reads: 11999
x-ms-request-id : 88af0f68-c1ba-4b0e-b1fb-e692dd8cd82c
x-ms-correlation-request-id : 88af0f68-c1ba-4b0e-b1fb-e692dd8cd82c
x-ms-routing-request-id : GERMANYNORTH:20221214T152420Z:88af0f68-c1ba-4b0e-b1fb-e692dd8cd82c
X-Content-Type-Options : nosniff
Date : Wed, 14 Dec 2022 15:24:19 GMT
Body:
{
"properties": {
"displayName": "dine-vmaas-backupvault-rg",
"policyType": "Custom",
"mode": "All",
"description": "ThispolicycreatesaResourceGrouptosubscriptionforRSVs.()",
"metadata": {
"createdBy": "9223d10b-9415-40b9-85e3-acd39f51d237",
"createdOn": "2022-06-07T10:30:23.5028101Z",
"updatedBy": "9223d10b-9415-40b9-85e3-acd39f51d237",
"updatedOn": "2022-12-14T11:53:35.7731819Z"
},
"parameters": {},
"policyRule": {
"if": {
"equals": "Microsoft.Resources/subscriptions",
"field": "type"
},
"then": {
"effect": "deployIfNotExists",
"details": {
"DeploymentScope": "subscription",
"ExistenceScope": "subscription",
"deployment": {
"properties": {
"mode": "incremental",
"template": {
"contentVersion": "1.0.0.1",
"$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
"parameters": {},
"resources": [
{
"properties": {},
"location": "westeurope",
"tags": {},
"apiVersion": "2018-05-01",
"name": "dcserver-backupVaults-rg",
"type": "Microsoft.Resources/resourceGroups"
}
]
},
"parameters": {}
},
"location": "westeurope"
},
"name": "dcserver-backupVaults-rg",
"roleDefinitionIds": [
"/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
],
"type": "Microsoft.Resources/subscriptions/resourceGroups"
}
}
}
},
"id": "/providers/Microsoft.Management/managementGroups/<groupname>/providers/Microsoft.Authorization/policyDefinitions/dine-vmaas-backupvault-rg",
"type": "Microsoft.Authorization/policyDefinitions",
"name": "dine-vmaas-backupvault-rg",
"systemData": {
"createdBy": "<account>",
"createdByType": "User",
"createdAt": "2022-06-07T10:30:23.4719617Z",
"lastModifiedBy": "<account>",
"lastModifiedByType": "User",
"lastModifiedAt": "2022-12-14T11:53:35.7040992Z"
}
}
DEBUG: ============================ HTTP REQUEST ============================
HTTP Method:
PUT
Absolute Uri:
https://management.azure.com/providers/Microsoft.Management/managementGroups/<groupname>/providers/Microsoft.Authorization/policydefinitions/dine-vmaas-backupvault-rg?api-version=2021-06-01
Headers:
User-Agent : Az.Resources/6.5.0,PSVersion/v7.3.0,AzurePowershell/v9.2.0
ParameterSetName : ManagementGroupNameParameterSet
CommandName : Set-AzPolicyDefinition
Body:
{
"name": "dine-vmaas-backupvault-rg",
"properties": {
"description": "**ThispolicycreatesaResourceGrouptosubscriptionforRSVs.**",
"displayName": "dine-vmaas-backupvault-rg",
"policyRule": {
"if": {
"equals": "Microsoft.Resources/subscriptions",
"field": "type"
},
"then": {
"effect": "deployIfNotExists",
"details": {
"DeploymentScope": "subscription",
"ExistenceScope": "subscription",
"deployment": {
"properties": {
"mode": "incremental",
"template": {
"contentVersion": "1.0.0.1",
"$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
"parameters": {},
"resources": [
{
"properties": {},
"location": "westeurope",
"tags": {},
"apiVersion": "2018-05-01",
"name": "dcserver-backupVaults-rg",
"type": "Microsoft.Resources/resourceGroups"
}
]
},
"parameters": {}
},
"location": "westeurope"
},
"name": "dcserver-backupVaults-rg",
"roleDefinitionIds": [
"/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
],
"type": "Microsoft.Resources/subscriptions/resourceGroups"
}
}
},
"metadata": {},
"parameters": {},
"mode": "All",
"policyType": "Custom"
}
}
DEBUG: ============================ HTTP RESPONSE ============================
Status Code:
Created
Headers:
Cache-Control : no-cache
Pragma : no-cache
Strict-Transport-Security : max-age=31536000; includeSubDomains
Server : Kestrel
x-ms-ratelimit-remaining-tenant-writes: 1199
x-ms-request-id : 1556c2d7-4522-4ebd-b34d-8b8578cf3074
x-ms-correlation-request-id : 1556c2d7-4522-4ebd-b34d-8b8578cf3074
x-ms-routing-request-id : GERMANYNORTH:20221214T152421Z:1556c2d7-4522-4ebd-b34d-8b8578cf3074
X-Content-Type-Options : nosniff
Date : Wed, 14 Dec 2022 15:24:21 GMT
Body:
{
"properties": {
"displayName": "dine-vmaas-backupvault-rg",
"policyType": "Custom",
"mode": "All",
"description": "ThispolicycreatesaResourceGrouptosubscriptionforRSVs.",
"metadata": {
"createdBy": "9223d10b-9415-40b9-85e3-acd39f51d237",
"createdOn": "2022-06-07T10:30:23.5028101Z",
"updatedBy": "9223d10b-9415-40b9-85e3-acd39f51d237",
"updatedOn": "2022-12-14T15:24:21.7332995Z"
},
"parameters": {},
"policyRule": {
"if": {
"equals": "Microsoft.Resources/subscriptions",
"field": "type"
},
"then": {
"effect": "deployIfNotExists",
"details": {
"DeploymentScope": "subscription",
"ExistenceScope": "subscription",
"deployment": {
"properties": {
"mode": "incremental",
"template": {
"contentVersion": "1.0.0.1",
"$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
"parameters": {},
"resources": [
{
"properties": {},
"location": "westeurope",
"tags": {},
"apiVersion": "2018-05-01",
"name": "dcserver-backupVaults-rg",
"type": "Microsoft.Resources/resourceGroups"
}
]
},
"parameters": {}
},
"location": "westeurope"
},
"name": "dcserver-backupVaults-rg",
"roleDefinitionIds": [
"/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
],
"type": "Microsoft.Resources/subscriptions/resourceGroups"
}
}
}
},
"id": "/providers/Microsoft.Management/managementGroups/<groupname>/providers/Microsoft.Authorization/policyDefinitions/dine-vmaas-backupvault-rg",
"type": "Microsoft.Authorization/policyDefinitions",
"name": "dine-vmaas-backupvault-rg",
"systemData": {
"createdBy": "<account>",
"createdByType": "User",
"createdAt": "2022-06-07T10:30:23.4719617Z",
"lastModifiedBy": "<account>",
"lastModifiedByType": "User",
"lastModifiedAt": "2022-12-14T15:24:21.6016531Z"
}
}
DEBUG: [Common.Authentication]: Authenticating using Account: '<account>', environment: 'AzureCloud', tenant: '<id>'
DEBUG: 16:24:21 - [SilentAuthenticator] Calling SharedTokenCacheCredential.GetTokenAsync - TenantId:'<id>', Scopes:'https://management.core.windows.net//.default', AuthorityHost:'https://login.microsoftonline.com/', UserId:'<account>'
DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId:
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 8da4d3e7-7e1d-4196-ba5f-6b2e64ba24db] IsLegacyAdalCacheEnabled: yes
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 8da4d3e7-7e1d-4196-ba5f-6b2e64ba24db] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 8da4d3e7-7e1d-4196-ba5f-6b2e64ba24db] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 8da4d3e7-7e1d-4196-ba5f-6b2e64ba24db] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 8da4d3e7-7e1d-4196-ba5f-6b2e64ba24db] IsLegacyAdalCacheEnabled: yes
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 8da4d3e7-7e1d-4196-ba5f-6b2e64ba24db] IsLegacyAdalCacheEnabled: yes
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z] Found 2 cache accounts and 0 broker accounts
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z] Returning 2 accounts
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee] MSAL MSAL.NetCore with assembly version '4.46.2.0'. CorrelationId(6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee)
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee] === AcquireTokenSilent Parameters ===
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee] LoginHint provided: False
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee] Account provided: True
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee] ForceRefresh: False
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee]
=== Request Data ===
Authority Provided? - True
Scopes - https://management.core.windows.net//.default
Extra Query Params Keys (space separated) -
ApiId - AcquireTokenSilent
IsConfidentialClient - False
SendX5C - False
LoginHint ? False
IsBrokerConfigured - False
HomeAccountId - False
CorrelationId - 6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee
UserAssertion set: False
LongRunningOboCacheKey set: False
Region configured:
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee] === Token Acquisition (SilentRequest) started:
Scopes: https://management.core.windows.net//.default
Authority Host: login.microsoftonline.com
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee] [Region discovery] Not using a regional authority.
DEBUG: False MSAL 4.46.2.0 MSAL.NetCore .NET 7.0.0 Microsoft Windows 10.0.22000 [2022-12-14 15:24:21Z - 6384c39f-5b09-4c4b-bb72-da6eb4f1a3ee] Access token is not expired. Returning the found cache entry.

Name               : dine-vmaas-backupvault-rg
ResourceId         : /providers/Microsoft.Management/managementGroups/<groupname>/providers/Microsoft.Authorization/policyDefinitions/dine-vmaas-backupvault-rg
ResourceName       : dine-vmaas-backupvault-rg
ResourceType       : Microsoft.Authorization/policyDefinitions
SubscriptionId     :
Properties         : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.Policy.PsPolicyDefinitionProperties
PolicyDefinitionId : /providers/Microsoft.Management/managementGroups/<groupname>/providers/Microsoft.Authorization/policyDefinitions/dine-vmaas-backupvault-rg
DEBUG: AzureQoSEvent: Module: Az.Resources:6.5.0; CommandName: Set-AzPolicyDefinition; PSVersion: 7.3.0; IsSuccess: True; Duration: 00:00:02.2504643
DEBUG: 16:24:22 - [ConfigManager] Got nothing from [EnableDataCollection], Module = [], Cmdlet = []. Returning default value [True].
DEBUG: 16:24:22 - SetAzurePolicyDefinitionCmdlet end processing.
```
### Environment data
```PowerShell
Name                           Value
----                           -----
PSVersion                      7.3.0
PSEdition                      Core
GitCommitId                    7.3.0
OS                             Microsoft Windows 10.0.22000
Platform                       Win32NT
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1
WSManStackVersion              3.0
```
### Module versions
```PowerShell
Get-Module Az*
ModuleType Version PreRelease Name             ExportedCommands
---------- ------- ---------- ----             ----------------
Script     2.10.4             Az.Accounts      {Add-AzEnvironment, Clear-AzConfig, Clear-AzContext, Clear-AzDefault…}
Script     6.5.0              Az.Resources     {Export-AzResourceGroup, Export-AzTemplateSpec, Get-AzDenyAssignment, Get-AzDeployment…}
```
### Error output
_No response_
thispolicycreatesaresourcegrouptosubscriptionforrsvs metadata createdby createdon updatedby updatedon parameters policyrule if equals microsoft resources subscriptions field type then effect deployifnotexists details deploymentscope subscription existencescope subscription deployment properties mode incremental template contentversion schema parameters resources properties location westeurope tags apiversion name dcserver backupvaults rg type microsoft resources resourcegroups parameters location westeurope name dcserver backupvaults rg roledefinitionids providers microsoft authorization roledefinitions type microsoft resources subscriptions resourcegroups id providers microsoft management managementgroups providers microsoft authorization policydefinitions dine vmaas backupvault rg type microsoft authorization policydefinitions name dine vmaas backupvault rg systemdata createdby createdbytype user createdat lastmodifiedby lastmodifiedbytype user lastmodifiedat debug http request http method put absolute uri headers user agent az resources psversion azurepowershell parametersetname managementgroupnameparameterset commandname set azpolicydefinition body name dine vmaas backupvault rg properties description thispolicycreatesaresourcegrouptosubscriptionforrsvs displayname dine vmaas backupvault rg policyrule if equals microsoft resources subscriptions field type then effect deployifnotexists details deploymentscope subscription existencescope subscription deployment properties mode incremental template contentversion schema parameters resources properties location westeurope tags apiversion name dcserver backupvaults rg type microsoft resources resourcegroups parameters location westeurope name dcserver backupvaults rg roledefinitionids providers microsoft authorization roledefinitions type microsoft resources subscriptions resourcegroups metadata parameters mode all policytype custom debug http response status code created headers cache control no cache pragma no cache strict transport security max age includesubdomains server kestrel x ms ratelimit remaining tenant writes x ms request id x ms correlation request id x ms routing request id germanynorth x content type options nosniff date wed dec gmt body properties displayname dine vmaas backupvault rg policytype custom mode all description thispolicycreatesaresourcegrouptosubscriptionforrsvs metadata createdby createdon updatedby updatedon parameters policyrule if equals microsoft resources subscriptions field type then effect deployifnotexists details deploymentscope subscription existencescope subscription deployment properties mode incremental template contentversion schema parameters resources properties location westeurope tags apiversion name dcserver backupvaults rg type microsoft resources resourcegroups parameters location westeurope name dcserver backupvaults rg roledefinitionids providers microsoft authorization roledefinitions type microsoft resources subscriptions resourcegroups id providers microsoft management managementgroups providers microsoft authorization policydefinitions dine vmaas backupvault rg type microsoft authorization policydefinitions name dine vmaas backupvault rg systemdata createdby createdbytype user createdat lastmodifiedby lastmodifiedbytype user lastmodifiedat debug authenticating using account environment azurecloud tenant debug calling sharedtokencachecredential gettokenasync tenantid scopes authorityhost userid debug sharedtokencachecredential gettoken invoked scopes parentrequestid debug false msal 
msal netcore net microsoft windows islegacyadalcacheenabled yes debug false msal msal netcore net microsoft windows not using a regional authority debug false msal msal netcore net microsoft windows not using a regional authority debug false msal msal netcore net microsoft windows not using a regional authority debug false msal msal netcore net microsoft windows islegacyadalcacheenabled yes debug false msal msal netcore net microsoft windows islegacyadalcacheenabled yes debug false msal msal netcore net microsoft windows found cache accounts and broker accounts debug false msal msal netcore net microsoft windows returning accounts debug false msal msal netcore net microsoft windows msal msal netcore with assembly version correlationid debug false msal msal netcore net microsoft windows acquiretokensilent parameters debug false msal msal netcore net microsoft windows loginhint provided false debug false msal msal netcore net microsoft windows account provided true debug false msal msal netcore net microsoft windows forcerefresh false debug false msal msal netcore net microsoft windows request data authority provided true scopes extra query params keys space separated apiid acquiretokensilent isconfidentialclient false false loginhint false isbrokerconfigured false homeaccountid false correlationid userassertion set false longrunningobocachekey set false region configured debug false msal msal netcore net microsoft windows token acquisition silentrequest started scopes authority host login microsoftonline com debug false msal msal netcore net microsoft windows not using a regional authority debug false msal msal netcore net microsoft windows access token is not expired returning the found cacname dine vmaas backupvault rg resourceid providers microsoft management managementgroups providers microsoft authorization policydefinitions dine vmaas backupvault rgresourcename dine vmaas backupvault rg resourcetype microsoft authorization policydefinitions subscriptionid properties microsoft azure commands resourcemanager cmdlets implementation policy pspolicydefinitionproperties policydefinitionid providers microsoft management managementgroups providers microsoft authorization policydefinitions dine vmaas backupvault rg debug azureqosevent module az resources commandname set azpolicydefinition psversion issuccess true duration debug got nothing from module cmdlet returning default value debug setazurepolicydefinitioncmdlet end processing environment data powershell name value psversion psedition core gitcommitid os microsoft windows platform pscompatibleversions … psremotingprotocolversion serializationversion wsmanstackversion module versions powershell get module az moduletype version prerelease name exportedcommands script az accounts add azenvironment clear azconfig clear azcontext clear azdefault… script az resources export azresourcegroup export aztemplatespec get azdenyassignment get azdeployment… error output no response | 0 |
119,333 | 10,039,261,569 | IssuesEvent | 2019-07-18 16:55:51 | GSA/tbm-scan | https://api.github.com/repos/GSA/tbm-scan | closed | TST: Set up CI and test suites | test | As a developer, I want to pipe all PRs through a test suite so that I can be assured of the code's functionality.
## Acceptance criteria
- [x] CircleCI config developed and implemented
- [x] Test coverage
- [x] Documentation on how to run tests in ReadMe and/or contributing.md | 1.0 | TST: Set up CI and test suites - As a developer, I want to pipe all PRs through a test suite so that I can be assured of the code's functionality.
## Acceptance criteria
- [x] CircleCI config developed and implemented
- [x] Test coverage
- [x] Documentation on how to run tests in ReadMe and/or contributing.md | non_priority | tst set up ci and test suites as a developer i want to pipe all prs through a test suite so that i can be assured of the code s functionality acceptance criteria circleci config developed and implemented test coverage documentation on how to run tests in readme and or contributing md | 0 |
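As a brief aside on the record above: a minimal pytest-style test of the sort such a suite would collect. The `scan_records` helper and its behavior are hypothetical stand-ins for whatever GSA/tbm-scan actually tests, not code from that repo.

```python
# Hypothetical example of a test the CI suite might run via `pytest`.
# `scan_records` is an illustrative stand-in, not a real tbm-scan function.
def scan_records(records):
    """Keep only non-empty records (toy function under test)."""
    return [r for r in records if r]


def test_scan_records_drops_empty_entries():
    assert scan_records(["a", "", "b"]) == ["a", "b"]


def test_scan_records_handles_empty_input():
    assert scan_records([]) == []
```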
44,244 | 23,528,788,295 | IssuesEvent | 2022-08-19 13:28:57 | hydroshare/hydroshare | https://api.github.com/repos/hydroshare/hydroshare | closed | My Resources Page: Performance Enhancements Needed | Performance | Currently it takes several seconds for the My Resources page to load. This is tedious for frequent users of HydroShare who are switching between the My Resources page and the landing pages for individual resources. It's gotten to the point that it is affecting the usability of the system. I suggest that it's time to take a critical look at the My Resources page to see if we can speed up the initial page load. The following are potential options @Maurier and I have discussed, although not in depth:
1. Add a paginator to the table of resources on the My Resources page
2. Create a different or modify the existing API call that is returning information for this page. Right now it returns way more information than is really needed to populate the table on the page.
3. Save the state of the filters in the browser's local storage to allow users to use their own defaults for the filter states.
Other strategies for speeding up this page may be needed. An additional performance consideration is the "Favoriting" functionality that is very slow if a large number of resources are selected for favoriting at the same time. | True | My Resources Page: Performance Enhancements Needed - Currently it takes several seconds for the My Resources page to load. This is tedious for frequent users of HydroShare who are switching between the My Resources page and the landing pages for individual resources. It's gotten to the point that it is affecting the usability of the system. I suggest that it's time to take a critical look at the My Resources page to see if we can speed up the initial page load. The following are potential options @Maurier and I have discussed, although not in depth:
1. Add a paginator to the table of resources on the My Resources page
2. Create a different or modify the existing API call that is returning information for this page. Right now it returns way more information than is really needed to populate the table on the page.
3. Save the state of the filters in the browser's local storage to allow users to use their own defaults for the filter states.
Other strategies for speeding up this page may be needed. An additional performance consideration is the "Favoriting" functionality that is very slow if a large number of resources are selected for favoriting at the same time. | non_priority | my resources page performance enhancements needed currently it takes several seconds for the my resources page to load this is tedious for frequent users of hydroshare who are switching between the my resources page and the landing pages for individual resources it s gotten to the point that it is affecting the usability of the system i suggest that it s time to take a critical look at the my resources page to see if we can speed up the initial page load the following are potential options maurier and i have discussed although not in depth add a paginator to the table of resources on the my resources page create a different or modify the existing api call that is returning information for this page right now it returns way more information than is really needed to populate the table on the page save the state of the filters in the browser s local storage to allow users to use their own defaults for the filter states other strategies for speeding up this page may be needed an additional performance consideration is the favoriting functionality that is very slow if a large number of resources are selected for favoriting at the same time | 0 |
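A rough sketch of option 1 from the record above (adding a paginator), assuming a Django-style backend such as HydroShare uses; the view name and `resources` queryset are illustrative placeholders, not actual HydroShare code.

```python
# Minimal server-side pagination sketch (Django); names are placeholders.
from django.core.paginator import Paginator


def my_resources_page(request, resources):
    # Serve one page of rows per request instead of the whole table,
    # so the initial page load no longer scales with total resource count.
    paginator = Paginator(resources, 25)      # 25 rows per page (arbitrary)
    page_number = request.GET.get("page", 1)
    return paginator.get_page(page_number)    # clamps invalid page numbers
```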
98,471 | 8,677,933,414 | IssuesEvent | 2018-11-30 18:17:23 | mozilla-services/syncstorage-rs | https://api.github.com/repos/mozilla-services/syncstorage-rs | closed | Invalid BsoBodies should fall through extraction | bug e2e tests in progress | post_bsos returns a mapping of "failed" bsos that failed validation, but the presence of invalid bsos shouldn't prevent other bsos in the post from being written to the database:
https://github.com/mozilla-services/server-syncstorage/blob/master/syncstorage/views/__init__.py#L354
The BsoBodies extractor currently produces an error on these which bails out the entire request.
This causes test_storage::test_set_collection to fail | 1.0 | Invalid BsoBodies should fall through extraction - post_bsos returns a mapping of "failed" bsos that failed validation, but the presence of invalid bsos shouldn't prevent other bsos in the post from being written to the database:
https://github.com/mozilla-services/server-syncstorage/blob/master/syncstorage/views/__init__.py#L354
The BsoBodies extractor currently produces an error on these which bails out the entire request.
This causes test_storage::test_set_collection to fail | non_priority | invalid bsobodies should fall through extraction post bsos returns a mapping of failed bsos that failed validation but the presence of invalid bsos shouldn t prevent other bsos in the post from being written to the database the bsobodies extractor currently produces an error on these which bails out the entire request this causes test storage test set collection to fail | 0 |
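A short Python sketch of the fall-through behavior this record asks for, modeled loosely on the server-syncstorage handler it links; the `validate` callback and dict shape are illustrative, not the actual syncstorage-rs extractor types.

```python
# Partition a batch POST into BSOs to write and BSOs to report as "failed",
# instead of rejecting the whole request when any one of them is invalid.
def split_bsos(bsos, validate):
    valid, failed = [], {}
    for bso in bsos:
        error = validate(bso)                   # None when the BSO is well-formed
        if error is None:
            valid.append(bso)                   # still written to the database
        else:
            failed[bso.get("id", "?")] = error  # surfaced to the client
    return valid, failed
```

Under this shape, invalid BSOs land in the `failed` mapping while the rest of the batch is still committed, which is what `test_set_collection` exercises.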
158,599 | 20,028,799,138 | IssuesEvent | 2022-02-02 01:17:06 | ibm-skills-network/editor.md | https://api.github.com/repos/ibm-skills-network/editor.md | opened | CVE-2021-41184 (Medium) detected in jquery-ui-1.12.0.min.js | security vulnerability | ## CVE-2021-41184 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-ui-1.12.0.min.js</b></p></summary>
<p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.0/jquery-ui.min.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.0/jquery-ui.min.js</a></p>
<p>Path to dependency file: /lib/codemirror/mode/slim/index.html</p>
<p>Path to vulnerable library: /lib/codemirror/mode/slim/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-ui-1.12.0.min.js** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery-UI is the official jQuery user interface library. Prior to version 1.13.0, accepting the value of the `of` option of the `.position()` util from untrusted sources may execute untrusted code. The issue is fixed in jQuery UI 1.13.0. Any string value passed to the `of` option is now treated as a CSS selector. A workaround is to not accept the value of the `of` option from untrusted sources.
<p>Publish Date: 2021-10-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41184>CVE-2021-41184</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41184">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41184</a></p>
<p>Release Date: 2021-10-26</p>
<p>Fix Resolution: jquery-ui - 1.13.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-41184 (Medium) detected in jquery-ui-1.12.0.min.js - ## CVE-2021-41184 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-ui-1.12.0.min.js</b></p></summary>
<p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.0/jquery-ui.min.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.0/jquery-ui.min.js</a></p>
<p>Path to dependency file: /lib/codemirror/mode/slim/index.html</p>
<p>Path to vulnerable library: /lib/codemirror/mode/slim/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-ui-1.12.0.min.js** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery-UI is the official jQuery user interface library. Prior to version 1.13.0, accepting the value of the `of` option of the `.position()` util from untrusted sources may execute untrusted code. The issue is fixed in jQuery UI 1.13.0. Any string value passed to the `of` option is now treated as a CSS selector. A workaround is to not accept the value of the `of` option from untrusted sources.
<p>Publish Date: 2021-10-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41184>CVE-2021-41184</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41184">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41184</a></p>
<p>Release Date: 2021-10-26</p>
<p>Fix Resolution: jquery-ui - 1.13.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in jquery ui min js cve medium severity vulnerability vulnerable library jquery ui min js a curated set of user interface interactions effects widgets and themes built on top of the jquery javascript library library home page a href path to dependency file lib codemirror mode slim index html path to vulnerable library lib codemirror mode slim index html dependency hierarchy x jquery ui min js vulnerable library found in base branch master vulnerability details jquery ui is the official jquery user interface library prior to version accepting the value of the of option of the position util from untrusted sources may execute untrusted code the issue is fixed in jquery ui any string value passed to the of option is now treated as a css selector a workaround is to not accept the value of the of option from untrusted sources publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery ui step up your open source security game with whitesource | 0 |
92,653 | 8,374,601,148 | IssuesEvent | 2018-10-05 14:07:41 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachprod: restarted node stuck in `waiting for init` | A-testing C-investigation | I have a 24 node `roachprod` cluster that's trying to run several large (tpcc 10k) restores with slightly increased concurrency. A few (the same few fwiw) of my nodes keep crashing due to out-of-memory.
On restarting them (`roachprod start <cluster>:<node>`), they appear to be stuck in `waiting for init`.
```
$ roachprod ssh david-restore:23 "./cockroach node ls --insecure"
Error: unable to connect or connection lost.
Please check the address and credentials such as certificates (if attempting to
communicate with a secure cluster).
rpc error: code = Unavailable desc = node waiting for init; /cockroach.server.serverpb.Status/Nodes not available
Failed running "node"
```
Looking at the process in `ps aux`, I see that it _does_ have the expected `join` flag:
```
./cockroach start --insecure [...] --join=<ip of node 1>:26257
```
I can access the cluster's admin UI and SQL prompt on `<ip of node 1>` above.
Restarting again (`roachprod stop` ... `start`) sometimes seems to fix it, but not always.
| 1.0 | roachprod: restarted node stuck in `waiting for init` - I have a 24 node `roachprod` cluster that's trying to run several large (tpcc 10k) restores with slightly increased concurrency. A few (the same few fwiw) of my nodes keep crashing due to out-of-memory.
On restarting them (`roachprod start <cluster>:<node>`), they appear to be stuck in `waiting for init`.
```
$ roachprod ssh david-restore:23 "./cockroach node ls --insecure"
Error: unable to connect or connection lost.
Please check the address and credentials such as certificates (if attempting to
communicate with a secure cluster).
rpc error: code = Unavailable desc = node waiting for init; /cockroach.server.serverpb.Status/Nodes not available
Failed running "node"
```
Looking at the process in `ps aux`, I see that it _does_ have the expected `join` flag:
```
./cockroach start --insecure [...] --join=<ip of node 1>:26257
```
I can access the cluster's admin UI and SQL prompt on `<ip of node 1>` above.
Restarting again (`roachprod stop` ... `start`) sometimes seems to fix it, but not always.
| non_priority | roachprod restarted node stuck in waiting for init i have a node roachprod cluster that s trying to run several large tpcc restores with slightly increased concurrency a few the same few fwiw of my nodes keep crashing due to out of memory on restarting them roachprod start they appear to be stuck in waiting for init roachprod ssh david restore cockroach node ls insecure error unable to connect or connection lost please check the address and credentials such as certificates if attempting to communicate with a secure cluster rpc error code unavailable desc node waiting for init cockroach server serverpb status nodes not available failed running node looking at the process in ps aux i see that it does have the expected join flag cockroach start insecure join i can access the cluster s admin ui and sql prompt on above restarting again roachprod stop start sometimes seems to fix it but not always | 0 |
248,331 | 21,011,776,099 | IssuesEvent | 2022-03-30 07:23:43 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: restoreTPCCInc/nodes=10 failed | C-test-failure O-robot O-roachtest release-blocker branch-release-22.1 | roachtest.restoreTPCCInc/nodes=10 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4728040&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4728040&tab=artifacts#/restoreTPCCInc/nodes=10) on release-22.1 @ [ff4e23e53388d3639f4b1b23b66e87d436556a8f](https://github.com/cockroachdb/cockroach/commits/ff4e23e53388d3639f4b1b23b66e87d436556a8f):
```
Wraps: (2) output in run_070926.707980968_n1_cockroach_sql
Wraps: (3) ./cockroach sql --insecure -e "
| RESTORE FROM '2021/05/21-020411.00' IN
| 'gs://cockroach-fixtures/tpcc-incrementals?AUTH=implicit'
| AS OF SYSTEM TIME '2021-05-21 14:40:22'" returned
| stderr:
| ERROR: importing 12872 ranges: googleapi: got HTTP response code 416 with body: <?xml version='1.0' encoding='UTF-8'?><Error><Code>InvalidRange</Code><Message>The requested range cannot be satisfied.</Message><Details>bytes=20296907-</Details></Error>
| Failed running "sql"
|
| stdout:
Wraps: (4) COMMAND_PROBLEM
Wraps: (5) Node 1. Command with error:
| ``````
| ./cockroach sql --insecure -e "
| RESTORE FROM '2021/05/21-020411.00' IN
| 'gs://cockroach-fixtures/tpcc-incrementals?AUTH=implicit'
| AS OF SYSTEM TIME '2021-05-21 14:40:22'"
| ``````
Wraps: (6) exit status 1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.Cmd (5) *hintdetail.withDetail (6) *exec.ExitError
monitor.go:127,restore.go:454,test_runner.go:866: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerRestore.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/restore.go:454
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:866
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func2
| main/pkg/cmd/roachtest/monitor.go:171
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| main/pkg/cmd/roachtest/monitor.go:80
| runtime.doInit
| GOROOT/src/runtime/proc.go:6498
| runtime.main
| GOROOT/src/runtime/proc.go:238
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/bulk-io
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*restoreTPCCInc/nodes=10.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: restoreTPCCInc/nodes=10 failed - roachtest.restoreTPCCInc/nodes=10 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4728040&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4728040&tab=artifacts#/restoreTPCCInc/nodes=10) on release-22.1 @ [ff4e23e53388d3639f4b1b23b66e87d436556a8f](https://github.com/cockroachdb/cockroach/commits/ff4e23e53388d3639f4b1b23b66e87d436556a8f):
```
Wraps: (2) output in run_070926.707980968_n1_cockroach_sql
Wraps: (3) ./cockroach sql --insecure -e "
| RESTORE FROM '2021/05/21-020411.00' IN
| 'gs://cockroach-fixtures/tpcc-incrementals?AUTH=implicit'
| AS OF SYSTEM TIME '2021-05-21 14:40:22'" returned
| stderr:
| ERROR: importing 12872 ranges: googleapi: got HTTP response code 416 with body: <?xml version='1.0' encoding='UTF-8'?><Error><Code>InvalidRange</Code><Message>The requested range cannot be satisfied.</Message><Details>bytes=20296907-</Details></Error>
| Failed running "sql"
|
| stdout:
Wraps: (4) COMMAND_PROBLEM
Wraps: (5) Node 1. Command with error:
| ``````
| ./cockroach sql --insecure -e "
| RESTORE FROM '2021/05/21-020411.00' IN
| 'gs://cockroach-fixtures/tpcc-incrementals?AUTH=implicit'
| AS OF SYSTEM TIME '2021-05-21 14:40:22'"
| ``````
Wraps: (6) exit status 1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.Cmd (5) *hintdetail.withDetail (6) *exec.ExitError
monitor.go:127,restore.go:454,test_runner.go:866: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerRestore.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/restore.go:454
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:866
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func2
| main/pkg/cmd/roachtest/monitor.go:171
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| main/pkg/cmd/roachtest/monitor.go:80
| runtime.doInit
| GOROOT/src/runtime/proc.go:6498
| runtime.main
| GOROOT/src/runtime/proc.go:238
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/bulk-io
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*restoreTPCCInc/nodes=10.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_priority | roachtest restoretpccinc nodes failed roachtest restoretpccinc nodes with on release wraps output in run cockroach sql wraps cockroach sql insecure e restore from in gs cockroach fixtures tpcc incrementals auth implicit as of system time returned stderr error importing ranges googleapi got http response code with body invalidrange the requested range cannot be satisfied bytes failed running sql stdout wraps command problem wraps node command with error cockroach sql insecure e restore from in gs cockroach fixtures tpcc incrementals auth implicit as of system time wraps exit status error types withstack withstack errutil withprefix cluster withcommanddetails errors cmd hintdetail withdetail exec exiterror monitor go restore go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitorimpl waite main pkg cmd roachtest monitor go main monitorimpl wait main pkg cmd roachtest monitor go github com cockroachdb cockroach pkg cmd roachtest tests registerrestore github com cockroachdb cockroach pkg cmd roachtest tests restore go main testrunner runtest main pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitorimpl wait main pkg cmd roachtest monitor go wraps monitor task failed wraps attached stack trace stack trace main init main pkg cmd roachtest monitor go runtime doinit goroot src runtime proc go runtime main goroot src runtime proc go runtime goexit goroot src runtime asm s wraps t fatal was called error types withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror help see see cc cockroachdb bulk io | 0 |
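The 416 in this record comes from an HTTP `Range` request (`bytes=20296907-`) whose start offset is at or past the end of the object. Below is a hedged Python sketch of that failure mode and a simple guard; the URL handling is illustrative and not CockroachDB's GCS client code.

```python
# Resuming a download with `Range: bytes=<offset>-` returns 416 once the
# offset reaches the object's length; checking the size first avoids it.
import requests


def read_from_offset(url, offset):
    size = int(requests.head(url).headers.get("Content-Length", 0))
    if offset >= size:
        return b""                  # nothing left; a ranged GET would 416
    resp = requests.get(url, headers={"Range": f"bytes={offset}-"})
    resp.raise_for_status()         # a 416 would raise here otherwise
    return resp.content
```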
39,435 | 19,977,455,317 | IssuesEvent | 2022-01-29 10:18:01 | mockingbirdnest/Principia | https://api.github.com/repos/mockingbirdnest/Principia | closed | Principia is slow on macOS (possibly due to Unity's allocator) | performance | TLDR: Unity seems to have a custom allocator that uses mutexes. Mutexes are slow on macOS. Consequently, on macOS the vast majority of Principia's time seems to be spent on memory management.
After getting back into KSP after a hiatus, I found the game to have poor performance. Running a trace led me to discover a bug in `Vessel::RepeatedlyFlowPrognostication` but even after fixing it (#2898), the poor performance persisted. Sometimes the game would even pause completely for several seconds. Further traces revealed the problem was due to four threads concurrently evaluating `OrbitAnalyser::AnalyseOrbit`. Profiling an AnalyseOrbit benchmark revealed nothing amiss.
However, I was able to get a flame graph ([attached](https://github.com/mockingbirdnest/Principia/files/6032965/principia_macos_flamegraph.svg.zip)) of a running game which revealed the problem:

The regions highlighted in magenta are mutexes used by functions in `UnityPlayer.dylib`. Based on the placement of the `UnityPlayer.dylib` functions in the call stack (and the fact that one of them is `operator new`), I suspect they are a custom allocator used by Unity. Unfortunately, the performance of the stock mutex on macOS is known to be bad (#1955). Thus, terrible performance. The reason this was not apparent in benchmarks is probably because the benchmarks are not run from Unity and hence use the system allocator. This isn't restricted to `AnalyseOrbit` either; the rest of Principia was also affected. I estimate that over 80% of Principia's CPU cost on my machine is spent managing these mutexes!
Something similar was diagnosed and fixed in #1955. Unfortunately, this time the slow mutexes are not Principia's but Unity's mutexes. Replacing the misbehaving mutexes with `absl::Mutex` is not really an option here.
I will try to fix this by forcing Principia to use the system allocator even when run from Unity. Stay tuned.
Possibly related to #2247. | True | Principia is slow on macOS (possibly due to Unity's allocator) - TLDR: Unity seems to have a custom allocator that uses mutexes. Mutexes are slow on macOS. Consequently, on macOS the vast majority of Principia's time seems to be spent on memory management.
After getting back into KSP after a hiatus, I found the game to have poor performance. Running a trace led me to discover a bug in `Vessel::RepeatedlyFlowPrognostication` but even after fixing it (#2898), the poor performance persisted. Sometimes the game would even pause completely for several seconds. Further traces revealed the problem was due to four threads concurrently evaluating `OrbitAnalyser::AnalyseOrbit`. Profiling an AnalyseOrbit benchmark revealed nothing amiss.
However, I was able to get a flame graph ([attached](https://github.com/mockingbirdnest/Principia/files/6032965/principia_macos_flamegraph.svg.zip)) of a running game which revealed the problem:

The regions highlighted in magenta are mutexes used by functions in `UnityPlayer.dylib`. Based on the placement of the `UnityPlayer.dylib` functions in the call stack (and the fact that one of them is `operator new`), I suspect they are a custom allocator used by Unity. Unfortunately, the performance of the stock mutex on macOS is known to be bad (#1955). Thus, terrible performance. The reason this was not apparent in benchmarks is probably because the benchmarks are not run from Unity and hence use the system allocator. This isn't restricted to `AnalyseOrbit` either; the rest of Principia was also affected. I estimate that over 80% of Principia's CPU cost on my machine is spent managing these mutexes!
Something similar was diagnosed and fixed in #1955. Unfortunately, this time the slow mutexes are not Principia's but Unity's mutexes. Replacing the misbehaving mutexes with `absl::Mutex` is not really an option here.
I will try to fix this by forcing Principia to use the system allocator even when run from Unity. Stay tuned.
Possibly related to #2247. | non_priority | principia is slow on macos possibly due to unity s allocator tldr unity seems to have a custom allocator that uses mutexes mutexes are slow on macos consequently on macos the vast majority of principia s time seems to be spent on memory management after getting back into ksp after a hiatus i found the game to have poor performance running a trace led me to discover a bug in vessel repeatedlyflowprognostication but even after fixing it the poor performance persisted sometimes the game would even pause completely for several seconds further traces revealed the problem was due to four threads concurrently evaluating orbitanalyser analyseorbit profiling an analyseorbit benchmark revealed nothing amiss however i was able to get a flame graph of a running game which revealed the problem the regions highlighted in magenta are mutexes used by functions in unityplayer dylib based on the placement of the unityplayer dylib functions in the call stack and the fact that one of them is operator new i suspect they are a custom allocator used by unity unfortunately the performance of the stock mutex on macos is known to be bad thus terrible performance the reason this was not apparent in benchmarks is probably because the benchmarks are not run from unity and hence use the system allocator this isn t restricted to analyzeorbit either the rest of principia was also affected i estimate that over of principia s cpu cost on my machine is spent managing these mutexes something similar was diagnosed and fixed in unfortunately this time the slow mutexes are not principia s but unity s mutexes replacing the misbehaving mutexes with absl mutex is not really an option here i will try to fix this by forcing principia to use the system allocator even when run from unity stay tuned possibly related to | 0 |
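As a toy illustration of the contention this record describes: threads forced through one shared mutex finish roughly N times slower than threads holding private locks. This is a generic demonstration only; it is Python rather than Unity's allocator or Principia code, and absolute timings vary by OS.

```python
# Threads serialize on a shared lock; private locks let them overlap.
import threading
import time


def run(workers, iters, shared):
    common = threading.Lock()

    def work(lock):
        for _ in range(iters):
            with lock:
                time.sleep(0.001)  # stand-in for work done under the lock

    threads = [
        threading.Thread(target=work, args=(common if shared else threading.Lock(),))
        for _ in range(workers)
    ]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start


print(f"shared mutex : {run(4, 100, True):.2f}s")   # roughly 4x the private case
print(f"private locks: {run(4, 100, False):.2f}s")
```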