| Column | Kind | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 7 to 112 |
| repo_url | stringlengths | 36 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 744 |
| labels | stringlengths | 4 to 574 |
| body | stringlengths | 9 to 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 to 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 188k |
| binary_label | int64 | 0 to 1 |
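A per-column summary of this shape can be regenerated from the rows themselves. The sketch below is a minimal illustration with pandas: the column names are taken from the listing above, but the sample values are hypothetical, not rows from the real dataset.

```python
import pandas as pd

# Hypothetical sample rows mirroring a few of the columns listed above;
# the values are illustrative, not taken from the real dataset.
df = pd.DataFrame({
    "type": ["IssuesEvent", "IssuesEvent"],
    "action": ["opened", "closed"],
    "label": ["non_process", "process"],
    "binary_label": [0, 1],
})

# Reproduce the kind of per-column summary shown above:
# numeric columns get min/max, string columns get a distinct-value count.
for col in df.columns:
    if pd.api.types.is_numeric_dtype(df[col]):
        print(f"{col}: {df[col].dtype}, {df[col].min()} to {df[col].max()}")
    else:
        print(f"{col}: {df[col].nunique()} values")
```

On a real export of this dataset, the same loop would report the value counts and ranges shown in the summary table.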
Unnamed: 0: 127,635
id: 17,346,584,361
type: IssuesEvent
created_at: 2021-07-29 00:16:17
repo: CDCgov/prime-reportstream
repo_url: https://api.github.com/repos/CDCgov/prime-reportstream
action: opened
title: Loading spinner component
labels: Design front-end website
body: ## Problem statement As we start working with new test types or advanced filtering, users may experience a beat or two of load time. Create a reusable loading spinner/indicator that can work across RS. ## What you need to know - As we're moving to React ... is there anything built in there that might be helpful with this? Theoretically things should be pretty swift with React anyway, right? - USWDS doesn't have a default spinner. We could use some of their default icons and animate with css or js? - Font Awesome has a default spinner class, but would invovlve adding some weight to the project and may duplicate some of the icons we already have with USWDS. - It also doesn't have to be a "spinner" either, investigate other loading indicators that may work. ## Acceptance criteria - [ ] Users are able to see an indicator that something is loading on the page ## To do - [ ] ...
index: 1.0
text_combine: Loading spinner component - ## Problem statement As we start working with new test types or advanced filtering, users may experience a beat or two of load time. Create a reusable loading spinner/indicator that can work across RS. ## What you need to know - As we're moving to React ... is there anything built in there that might be helpful with this? Theoretically things should be pretty swift with React anyway, right? - USWDS doesn't have a default spinner. We could use some of their default icons and animate with css or js? - Font Awesome has a default spinner class, but would invovlve adding some weight to the project and may duplicate some of the icons we already have with USWDS. - It also doesn't have to be a "spinner" either, investigate other loading indicators that may work. ## Acceptance criteria - [ ] Users are able to see an indicator that something is loading on the page ## To do - [ ] ...
label: non_process
text: loading spinner component problem statement as we start working with new test types or advanced filtering users may experience a beat or two of load time create a reusable loading spinner indicator that can work across rs what you need to know as we re moving to react is there anything built in there that might be helpful with this theoretically things should be pretty swift with react anyway right uswds doesn t have a default spinner we could use some of their default icons and animate with css or js font awesome has a default spinner class but would invovlve adding some weight to the project and may duplicate some of the icons we already have with uswds it also doesn t have to be a spinner either investigate other loading indicators that may work acceptance criteria users are able to see an indicator that something is loading on the page to do
binary_label: 0
Unnamed: 0: 180,112
id: 13,921,274,554
type: IssuesEvent
created_at: 2020-10-21 11:42:09
repo: oasisprotocol/oasis-core
repo_url: https://api.github.com/repos/oasisprotocol/oasis-core
action: closed
title: Runtime upgrade E2E test should wait for old node expiration
labels: c:bug c:testing
body: Currently [the runtime upgrade E2E test](https://github.com/oasisprotocol/oasis-core/blob/e79b4979c20447f4b47ba4e7a35ae67e74647a0e/go/oasis-test-runner/scenario/e2e/runtime/runtime_upgrade.go) shuts down the old compute nodes after upgrade and starts the client, assuming that the new compute nodes will process the transactions: https://github.com/oasisprotocol/oasis-core/blob/e79b4979c20447f4b47ba4e7a35ae67e74647a0e/go/oasis-test-runner/scenario/e2e/runtime/runtime_upgrade.go#L211-L220 The problem is that the committee scheduler can still schedule one of the old nodes, making the test flaky as it depends on what committee is elected. We should probably wait for the old nodes to expire before proceeding.
index: 1.0
text_combine: Runtime upgrade E2E test should wait for old node expiration - Currently [the runtime upgrade E2E test](https://github.com/oasisprotocol/oasis-core/blob/e79b4979c20447f4b47ba4e7a35ae67e74647a0e/go/oasis-test-runner/scenario/e2e/runtime/runtime_upgrade.go) shuts down the old compute nodes after upgrade and starts the client, assuming that the new compute nodes will process the transactions: https://github.com/oasisprotocol/oasis-core/blob/e79b4979c20447f4b47ba4e7a35ae67e74647a0e/go/oasis-test-runner/scenario/e2e/runtime/runtime_upgrade.go#L211-L220 The problem is that the committee scheduler can still schedule one of the old nodes, making the test flaky as it depends on what committee is elected. We should probably wait for the old nodes to expire before proceeding.
label: non_process
text: runtime upgrade test should wait for old node expiration currently shuts down the old compute nodes after upgrade and starts the client assuming that the new compute nodes will process the transactions the problem is that the committee scheduler can still schedule one of the old nodes making the test flaky as it depends on what committee is elected we should probably wait for the old nodes to expire before proceeding
binary_label: 0
Unnamed: 0: 2,248
id: 5,088,648,204
type: IssuesEvent
created_at: 2016-12-31 23:59:51
repo: sw4j-org/tool-jpa-processor
repo_url: https://api.github.com/repos/sw4j-org/tool-jpa-processor
action: opened
title: Handle @MapsId Annotation
labels: annotation processor task
body: Handle the `@MapsId` annotation for a property or field. See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf) - 11.1.39 MapsId Annotation
index: 1.0
text_combine: Handle @MapsId Annotation - Handle the `@MapsId` annotation for a property or field. See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf) - 11.1.39 MapsId Annotation
label: process
text: handle mapsid annotation handle the mapsid annotation for a property or field see mapsid annotation
binary_label: 1
Unnamed: 0: 72,597
id: 9,601,242,190
type: IssuesEvent
created_at: 2019-05-10 11:36:23
repo: gama-platform/gama
repo_url: https://api.github.com/repos/gama-platform/gama
action: opened
title: Several items missing from categories in the online doc
labels: > Bug Affects Usability Concerns Documentation Concerns GAML OS All Priority Critical Version Git
body: **Describe the bug** Some items are completely missing from the online, incl. types defined in plugins (like `emotion` in `simple_bdi`) or in the core, statements defined in the plugins, etc.
index: 1.0
text_combine: Several items missing from categories in the online doc - **Describe the bug** Some items are completely missing from the online, incl. types defined in plugins (like `emotion` in `simple_bdi`) or in the core, statements defined in the plugins, etc.
label: non_process
text: several items missing from categories in the online doc describe the bug some items are completely missing from the online incl types defined in plugins like emotion in simple bdi or in the core statements defined in the plugins etc
binary_label: 0
Unnamed: 0: 17,066
id: 22,502,946,725
type: IssuesEvent
created_at: 2022-06-23 13:22:12
repo: dita-ot/dita-ot
repo_url: https://api.github.com/repos/dita-ot/dita-ot
action: closed
title: Avoid renaming non-DITA resources used in branch filtering with dvrResourceSuffix or dvrResourcePrefix specified
labels: priority/medium preprocess/filtering enhancement preprocess/branch-filtering
body: Let's say I have in the DITA Map a portion like this: <topicref href="topics/introduction.dita"> <ditavalref href="test.ditaval"> <ditavalmeta> <dvrResourceSuffix>stuff</dvrResourceSuffix> </ditavalmeta> </ditavalref> <topicref href="test.pdf" format="pdf" navtitle="THIS IS MY PDF"/> </topicref> Right now (even with latest DITA OT 2.x built from the develop branch) when you generate XHTML, the table of contents will have a broken reference to "teststuff.pdf". In my opinion the dvrResourceSuffix and dvrResourcePrefix should only act upon topic references made to DITA resource, and not topic references made to non-DITA ones.
index: 2.0
text_combine: Avoid renaming non-DITA resources used in branch filtering with dvrResourceSuffix or dvrResourcePrefix specified - Let's say I have in the DITA Map a portion like this: <topicref href="topics/introduction.dita"> <ditavalref href="test.ditaval"> <ditavalmeta> <dvrResourceSuffix>stuff</dvrResourceSuffix> </ditavalmeta> </ditavalref> <topicref href="test.pdf" format="pdf" navtitle="THIS IS MY PDF"/> </topicref> Right now (even with latest DITA OT 2.x built from the develop branch) when you generate XHTML, the table of contents will have a broken reference to "teststuff.pdf". In my opinion the dvrResourceSuffix and dvrResourcePrefix should only act upon topic references made to DITA resource, and not topic references made to non-DITA ones.
label: process
text: avoid renaming non dita resources used in branch filtering with dvrresourcesuffix or dvrresourceprefix specified let s say i have in the dita map a portion like this stuff right now even with latest dita ot x built from the develop branch when you generate xhtml the table of contents will have a broken reference to teststuff pdf in my opinion the dvrresourcesuffix and dvrresourceprefix should only act upon topic references made to dita resource and not topic references made to non dita ones
binary_label: 1
Unnamed: 0: 893
id: 3,355,307,589
type: IssuesEvent
created_at: 2015-11-18 15:58:55
repo: nodejs/node
repo_url: https://api.github.com/repos/nodejs/node
action: closed
title: investigate potentially flaky test on centos
labels: child_process test
body: https://ci.nodejs.org/job/node-test-commit-linux/1215/nodes=centos5-32/console ``` not ok 61 test-child-process-spawnsync-input.js # #assert.js:89 # throw new assert.AssertionError({ # ^ #AssertionError: <Buffer > deepEqual <Buffer 74 68 69 73 20 69 73 20 73 74 64 6f 75 74 0a> # at Object.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux/nodes/centos5-32/test/parallel/test-child-process-spawnsync-input.js:98:8) # at Module._compile (module.js:423:26) # at Object.Module._extensions..js (module.js:430:10) # at Module.load (module.js:354:32) # at Function.Module._load (module.js:311:12) # at Function.Module.runMain (module.js:455:10) # at startup (node.js:138:18) # at node.js:974:3 ``` I've seen this before, only on centos.
index: 1.0
text_combine: investigate potentially flaky test on centos - https://ci.nodejs.org/job/node-test-commit-linux/1215/nodes=centos5-32/console ``` not ok 61 test-child-process-spawnsync-input.js # #assert.js:89 # throw new assert.AssertionError({ # ^ #AssertionError: <Buffer > deepEqual <Buffer 74 68 69 73 20 69 73 20 73 74 64 6f 75 74 0a> # at Object.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux/nodes/centos5-32/test/parallel/test-child-process-spawnsync-input.js:98:8) # at Module._compile (module.js:423:26) # at Object.Module._extensions..js (module.js:430:10) # at Module.load (module.js:354:32) # at Function.Module._load (module.js:311:12) # at Function.Module.runMain (module.js:455:10) # at startup (node.js:138:18) # at node.js:974:3 ``` I've seen this before, only on centos.
label: process
text: investigate potentially flaky test on centos not ok test child process spawnsync input js assert js throw new assert assertionerror assertionerror deepequal at object home iojs build workspace node test commit linux nodes test parallel test child process spawnsync input js at module compile module js at object module extensions js module js at module load module js at function module load module js at function module runmain module js at startup node js at node js i ve seen this before only on centos
binary_label: 1
Unnamed: 0: 50,458
id: 26,653,562,996
type: IssuesEvent
created_at: 2023-01-25 15:18:24
repo: getsentry/sentry-javascript
repo_url: https://api.github.com/repos/getsentry/sentry-javascript
action: closed
title: Transactions/spans started regardless of tracing enablement
labels: Type: Improvement Type: Breaking Feature: Performance Package: tracing Status: Backlog
body: _Note that in the description below, "tracing being disabled" does NOT mean having `tracesSampleRate` set to `0`, but rather having neither `tracesSampleRate` nor `tracesSampler` defined in `Sentry.init()._ Right now, if tracing is disabled, our SDKs correctly do not send transactions to Sentry, by [forcing the sampling decision to be `false`](https://github.com/getsentry/sentry-javascript/blob/314d117b75016ad7e987acf90239d0025973f171/packages/tracing/src/hubextensions.ts#L43-L48). This prevents the transaction fro being sent, and also has the side effect of making the transaction not record its child spans. Notably, what it does _not_ do, however, is prevent those transactions and spans from being created, nor prevent the logging which goes along with them. This has a few disadvantages: - Unnecessary overhead. In a browser it might not be bad (there's likely only one transaction happening at once), but on a server it could add up quickly (since every request has a corresponding transaction). - Noisy, potentially confusing logs. If tracing is disabled, we shouldn't be logging about it. - In Node, inclusion on all outgoing HTTP requests of the `sentry-trace` header. A disabled system not be modifying outgoing requests. Furthermore, sending the header (and therefore a sampling decision) has the potential to change the behavior of downstream services. As of version 6.17.9, this affects all instances of starting a transaction or a span except: - starting a serverside transaction in nextjs - starting a span for either an XHR or fetch request in the browser To fix this, probably the easiest thing to do is to have `Hub.startTransaction` and `Transaction.startChild` silently bail if tracing is disabled. We'll have to adjust the spots which call those methods to be able to handle getting `undefined` back, also. Related to https://github.com/getsentry/sentry-javascript/issues/4051.
index: True
text_combine: Transactions/spans started regardless of tracing enablement - _Note that in the description below, "tracing being disabled" does NOT mean having `tracesSampleRate` set to `0`, but rather having neither `tracesSampleRate` nor `tracesSampler` defined in `Sentry.init()._ Right now, if tracing is disabled, our SDKs correctly do not send transactions to Sentry, by [forcing the sampling decision to be `false`](https://github.com/getsentry/sentry-javascript/blob/314d117b75016ad7e987acf90239d0025973f171/packages/tracing/src/hubextensions.ts#L43-L48). This prevents the transaction fro being sent, and also has the side effect of making the transaction not record its child spans. Notably, what it does _not_ do, however, is prevent those transactions and spans from being created, nor prevent the logging which goes along with them. This has a few disadvantages: - Unnecessary overhead. In a browser it might not be bad (there's likely only one transaction happening at once), but on a server it could add up quickly (since every request has a corresponding transaction). - Noisy, potentially confusing logs. If tracing is disabled, we shouldn't be logging about it. - In Node, inclusion on all outgoing HTTP requests of the `sentry-trace` header. A disabled system not be modifying outgoing requests. Furthermore, sending the header (and therefore a sampling decision) has the potential to change the behavior of downstream services. As of version 6.17.9, this affects all instances of starting a transaction or a span except: - starting a serverside transaction in nextjs - starting a span for either an XHR or fetch request in the browser To fix this, probably the easiest thing to do is to have `Hub.startTransaction` and `Transaction.startChild` silently bail if tracing is disabled. We'll have to adjust the spots which call those methods to be able to handle getting `undefined` back, also. Related to https://github.com/getsentry/sentry-javascript/issues/4051.
label: non_process
text: transactions spans started regardless of tracing enablement note that in the description below tracing being disabled does not mean having tracessamplerate set to but rather having neither tracessamplerate nor tracessampler defined in sentry init right now if tracing is disabled our sdks correctly do not send transactions to sentry by this prevents the transaction fro being sent and also has the side effect of making the transaction not record its child spans notably what it does not do however is prevent those transactions and spans from being created nor prevent the logging which goes along with them this has a few disadvantages unnecessary overhead in a browser it might not be bad there s likely only one transaction happening at once but on a server it could add up quickly since every request has a corresponding transaction noisy potentially confusing logs if tracing is disabled we shouldn t be logging about it in node inclusion on all outgoing http requests of the sentry trace header a disabled system not be modifying outgoing requests furthermore sending the header and therefore a sampling decision has the potential to change the behavior of downstream services as of version this affects all instances of starting a transaction or a span except starting a serverside transaction in nextjs starting a span for either an xhr or fetch request in the browser to fix this probably the easiest thing to do is to have hub starttransaction and transaction startchild silently bail if tracing is disabled we ll have to adjust the spots which call those methods to be able to handle getting undefined back also related to
binary_label: 0
Unnamed: 0: 176,753
id: 28,149,138,135
type: IssuesEvent
created_at: 2023-04-02 20:44:02
repo: bounswe/bounswe2023group1
repo_url: https://api.github.com/repos/bounswe/bounswe2023group1
action: closed
title: Creating and Documentation of High Level Mock-ups
labels: Priority/High Status/In Progress Type/Design
body: We are creating mockups for each of the the four user groups we have. The work will involve writing the mockup texts, creating mock UI's and updating the requirements based on newly discovered needs(#88). A common template for the mock UI's will be created in #72. These issues track the progress of the four mockups: - #68 - #69 - #70 - #71
index: 1.0
text_combine: Creating and Documentation of High Level Mock-ups - We are creating mockups for each of the the four user groups we have. The work will involve writing the mockup texts, creating mock UI's and updating the requirements based on newly discovered needs(#88). A common template for the mock UI's will be created in #72. These issues track the progress of the four mockups: - #68 - #69 - #70 - #71
label: non_process
text: creating and documentation of high level mock ups we are creating mockups for each of the the four user groups we have the work will involve writing the mockup texts creating mock ui s and updating the requirements based on newly discovered needs a common template for the mock ui s will be created in these issues track the progress of the four mockups
binary_label: 0
Unnamed: 0: 22,382
id: 31,142,283,889
type: IssuesEvent
created_at: 2023-08-16 01:44:06
repo: cypress-io/cypress
repo_url: https://api.github.com/repos/cypress-io/cypress
action: closed
title: Flaky test: net_stubbing absolute path
labels: process: flaky test topic: flake โ„๏ธ stage: fire watch priority: low topic: net_stubbing.cy.ts stale
body: ### Link to dashboard or CircleCI failure https://dashboard.cypress.io/projects/ypt4pf/runs/38102/test-results/06201ab2-edbe-4cc8-a295-40de3145e25b ### Link to failing test in GitHub https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/net_stubbing.cy.ts#L1859 ### Analysis <img width="453" alt="Screen Shot 2022-08-17 at 9 18 29 PM" src="https://user-images.githubusercontent.com/26726429/185292702-476cdaa5-a132-4ff2-af36-894f3fd2cc37.png"> ### Cypress Version 10.6.0 ### Other Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
index: 1.0
text_combine: Flaky test: net_stubbing absolute path - ### Link to dashboard or CircleCI failure https://dashboard.cypress.io/projects/ypt4pf/runs/38102/test-results/06201ab2-edbe-4cc8-a295-40de3145e25b ### Link to failing test in GitHub https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/net_stubbing.cy.ts#L1859 ### Analysis <img width="453" alt="Screen Shot 2022-08-17 at 9 18 29 PM" src="https://user-images.githubusercontent.com/26726429/185292702-476cdaa5-a132-4ff2-af36-894f3fd2cc37.png"> ### Cypress Version 10.6.0 ### Other Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
label: process
text: flaky test net stubbing absolute path link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at pm src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
binary_label: 1
Unnamed: 0: 71,218
id: 15,185,661,336
type: IssuesEvent
created_at: 2021-02-15 11:14:45
repo: Neko7sora/github-slideshow
repo_url: https://api.github.com/repos/Neko7sora/github-slideshow
action: opened
title: CVE-2011-4969 (Medium) detected in jquery-1.4.4.min.js
labels: security vulnerability
body: ## CVE-2011-4969 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.4.4.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js</a></p> <p>Path to dependency file: github-slideshow/assets/fonts/droid-serif/web-fonts/droidserif_bolditalic_macroman/DroidSerif-BoldItalic-demo.html</p> <p>Path to vulnerable library: github-slideshow/assets/fonts/droid-serif/web-fonts/droidserif_bolditalic_macroman/DroidSerif-BoldItalic-demo.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.4.4.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Neko7sora/github-slideshow/commit/8fc3587745ff93a1f1fa8c24067543cc601e781a">8fc3587745ff93a1f1fa8c24067543cc601e781a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Cross-site scripting (XSS) vulnerability in jQuery before 1.6.3, when using location.hash to select elements, allows remote attackers to inject arbitrary web script or HTML via a crafted tag. <p>Publish Date: 2013-03-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2011-4969>CVE-2011-4969</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-4969">https://nvd.nist.gov/vuln/detail/CVE-2011-4969</a></p> <p>Release Date: 2013-03-08</p> <p>Fix Resolution: 1.6.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine: CVE-2011-4969 (Medium) detected in jquery-1.4.4.min.js - ## CVE-2011-4969 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.4.4.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js</a></p> <p>Path to dependency file: github-slideshow/assets/fonts/droid-serif/web-fonts/droidserif_bolditalic_macroman/DroidSerif-BoldItalic-demo.html</p> <p>Path to vulnerable library: github-slideshow/assets/fonts/droid-serif/web-fonts/droidserif_bolditalic_macroman/DroidSerif-BoldItalic-demo.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.4.4.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Neko7sora/github-slideshow/commit/8fc3587745ff93a1f1fa8c24067543cc601e781a">8fc3587745ff93a1f1fa8c24067543cc601e781a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Cross-site scripting (XSS) vulnerability in jQuery before 1.6.3, when using location.hash to select elements, allows remote attackers to inject arbitrary web script or HTML via a crafted tag. <p>Publish Date: 2013-03-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2011-4969>CVE-2011-4969</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-4969">https://nvd.nist.gov/vuln/detail/CVE-2011-4969</a></p> <p>Release Date: 2013-03-08</p> <p>Fix Resolution: 1.6.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text: cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file github slideshow assets fonts droid serif web fonts droidserif bolditalic macroman droidserif bolditalic demo html path to vulnerable library github slideshow assets fonts droid serif web fonts droidserif bolditalic macroman droidserif bolditalic demo html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details cross site scripting xss vulnerability in jquery before when using location hash to select elements allows remote attackers to inject arbitrary web script or html via a crafted tag publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
binary_label: 0
Unnamed: 0: 302,060
id: 22,785,994,597
type: IssuesEvent
created_at: 2022-07-09 08:44:25
repo: h-dt/hola-clone
repo_url: https://api.github.com/repos/h-dt/hola-clone
action: closed
title: [bug,docs] Front home ํ™”๋ฉด
labels: documentation help wanted status: in progress question
body: ![image](https://user-images.githubusercontent.com/94469974/177915153-5103c757-817b-4d82-8bcc-f081415e2797.png) home ํ™”๋ฉด ์ „์ฒด/ ํ”„๋กœ์ ํŠธ/์Šคํ„ฐ๋”” selectํ•˜๋Š” ์˜์—ญ์—์„œ ํ”„๋กœ์ ํŠธ ํ•˜๊ณ  ์Šคํ„ฐ๋””๋Š” ๊ฐ๊ฐ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ›์•„์˜ค๋ฉด ๋˜๋Š”๋ฐ ์ „์ฒด์ ์œผ๋กœ ๋ฐ›์•„์˜ค๋Š” ๋ฐ์ดํ„ฐ๋Š” ํ”„๋กœ์ ํŠธ+์Šคํ„ฐ๋””๋กœ ์ฃผ์‹œ๋Š”์ง€ ์•„๋‹ˆ๋ฉด ์ „์ฒด๋ฐ์ดํ„ฐ ํ•˜๋‚˜๋ฅผ ์ฃผ์‹œ๋Š”์ง€ ํ™•์ธํ•ด๋ณด์‹œ๊ณ  ์•Œ๋ ค์ฃผ์‹œ๋ฉด ๊ฐ์‚ฌํ•˜๊ฒŸ์Šต๋‹ˆ๋‹ค!
index: 1.0
text_combine: [bug,docs] Front home ํ™”๋ฉด - ![image](https://user-images.githubusercontent.com/94469974/177915153-5103c757-817b-4d82-8bcc-f081415e2797.png) home ํ™”๋ฉด ์ „์ฒด/ ํ”„๋กœ์ ํŠธ/์Šคํ„ฐ๋”” selectํ•˜๋Š” ์˜์—ญ์—์„œ ํ”„๋กœ์ ํŠธ ํ•˜๊ณ  ์Šคํ„ฐ๋””๋Š” ๊ฐ๊ฐ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ›์•„์˜ค๋ฉด ๋˜๋Š”๋ฐ ์ „์ฒด์ ์œผ๋กœ ๋ฐ›์•„์˜ค๋Š” ๋ฐ์ดํ„ฐ๋Š” ํ”„๋กœ์ ํŠธ+์Šคํ„ฐ๋””๋กœ ์ฃผ์‹œ๋Š”์ง€ ์•„๋‹ˆ๋ฉด ์ „์ฒด๋ฐ์ดํ„ฐ ํ•˜๋‚˜๋ฅผ ์ฃผ์‹œ๋Š”์ง€ ํ™•์ธํ•ด๋ณด์‹œ๊ณ  ์•Œ๋ ค์ฃผ์‹œ๋ฉด ๊ฐ์‚ฌํ•˜๊ฒŸ์Šต๋‹ˆ๋‹ค!
label: non_process
text: front home ํ™”๋ฉด home ํ™”๋ฉด ์ „์ฒด ํ”„๋กœ์ ํŠธ ์Šคํ„ฐ๋”” selectํ•˜๋Š” ์˜์—ญ์—์„œ ํ”„๋กœ์ ํŠธ ํ•˜๊ณ  ์Šคํ„ฐ๋””๋Š” ๊ฐ๊ฐ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ›์•„์˜ค๋ฉด ๋˜๋Š”๋ฐ ์ „์ฒด์ ์œผ๋กœ ๋ฐ›์•„์˜ค๋Š” ๋ฐ์ดํ„ฐ๋Š” ํ”„๋กœ์ ํŠธ ์Šคํ„ฐ๋””๋กœ ์ฃผ์‹œ๋Š”์ง€ ์•„๋‹ˆ๋ฉด ์ „์ฒด๋ฐ์ดํ„ฐ ํ•˜๋‚˜๋ฅผ ์ฃผ์‹œ๋Š”์ง€ ํ™•์ธํ•ด๋ณด์‹œ๊ณ  ์•Œ๋ ค์ฃผ์‹œ๋ฉด ๊ฐ์‚ฌํ•˜๊ฒŸ์Šต๋‹ˆ๋‹ค
binary_label: 0
Unnamed: 0: 1,829
id: 4,613,629,721
type: IssuesEvent
created_at: 2016-09-25 04:10:12
repo: EBrown8534/StackExchangeStatisticsExplorer
repo_url: https://api.github.com/repos/EBrown8534/StackExchangeStatisticsExplorer
action: opened
title: Site-to-Site Comparison Graphs
labels: enhancement in process
body: We should have line graphs that show comparisons between all sites selected in the site-to-site comparison for various metrics. (Need a list of metrics?)
index: 1.0
text_combine: Site-to-Site Comparison Graphs - We should have line graphs that show comparisons between all sites selected in the site-to-site comparison for various metrics. (Need a list of metrics?)
label: process
text: site to site comparison graphs we should have line graphs that show comparisons between all sites selected in the site to site comparison for various metrics need a list of metrics
binary_label: 1
Unnamed: 0: 6,648
id: 9,769,806,538
type: IssuesEvent
created_at: 2019-06-06 09:26:07
repo: stoyicker/test-accessors
repo_url: https://api.github.com/repos/stoyicker/test-accessors
action: closed
title: Add a fallback to Field#modifiers field to fetch the Field#accessFlags field instead
labels: bug processor-java
body: Android's implementation of Field doesn't have a modifiers field, which means that static setters in instrumented tests fail.
index: 1.0
text_combine: Add a fallback to Field#modifiers field to fetch the Field#accessFlags field instead - Android's implementation of Field doesn't have a modifiers field, which means that static setters in instrumented tests fail.
label: process
text: add a fallback to field modifiers field to fetch the field accessflags field instead android s implementation of field doesn t have a modifiers field which means that static setters in instrumented tests fail
binary_label: 1
Unnamed: 0: 1,870
id: 4,697,583,912
type: IssuesEvent
created_at: 2016-10-12 09:50:52
repo: nodejs/node
repo_url: https://api.github.com/repos/nodejs/node
action: closed
title: Spawned process don't trigger close or exit event
labels: child_process
body: Hi, When i spawn a process and write in his stdin he will not trigger "close" or "exit" event. ```javascript emuObject.currentProcess = spawn(externalCommands[cmd], cmdArgs, {cwd:emuObject.cd}); emuObject.stdin.pipe(emuObject.currentProcess.stdin); emuObject.currentProcess.stdout.on('data', function(data){ self.emit('createOutput', emuObject, data.toString()); }); emuObject.currentProcess.stderr.on('data', function(data){ self.emit('createError', emuObject, data.toString()); }); emuObject.currentProcess.on('exit', function(code, signal){ console.log('exited process'); }); emuObject.currentProcess.on('close', function(code, signal){ console.log('closed process'); if(code == 0){ emuObject.currentProcess = null; self.emit('createLine', emuObject, emuObject.cd); } else{ emuObject.currentProcess = null; self.emit('createLine', emuObject, emuObject.cd); self.emit('killEmu', emuObject, true); } }); ``` i tried: ```javascript emuObject.currentProcess.stdin.resume(); emuObject.currentProcess.stdin.write(cmd + '\n'); emuObject.currentProcess.stdin.pause(); ``` but it does nothing I tried too ```javascript emuObject.currentProcess.stdin.write(cmd + '\n'); emuObject.currentProcess.stdin.end(); ``` but it end the child process instantly. Just in case it can have something to do with this issue, i'm using nw.js Any idea?
index: 1.0
text_combine: Spawned process don't trigger close or exit event - Hi, When i spawn a process and write in his stdin he will not trigger "close" or "exit" event. ```javascript emuObject.currentProcess = spawn(externalCommands[cmd], cmdArgs, {cwd:emuObject.cd}); emuObject.stdin.pipe(emuObject.currentProcess.stdin); emuObject.currentProcess.stdout.on('data', function(data){ self.emit('createOutput', emuObject, data.toString()); }); emuObject.currentProcess.stderr.on('data', function(data){ self.emit('createError', emuObject, data.toString()); }); emuObject.currentProcess.on('exit', function(code, signal){ console.log('exited process'); }); emuObject.currentProcess.on('close', function(code, signal){ console.log('closed process'); if(code == 0){ emuObject.currentProcess = null; self.emit('createLine', emuObject, emuObject.cd); } else{ emuObject.currentProcess = null; self.emit('createLine', emuObject, emuObject.cd); self.emit('killEmu', emuObject, true); } }); ``` i tried: ```javascript emuObject.currentProcess.stdin.resume(); emuObject.currentProcess.stdin.write(cmd + '\n'); emuObject.currentProcess.stdin.pause(); ``` but it does nothing I tried too ```javascript emuObject.currentProcess.stdin.write(cmd + '\n'); emuObject.currentProcess.stdin.end(); ``` but it end the child process instantly. Just in case it can have something to do with this issue, i'm using nw.js Any idea?
label: process
text: spawned process don t trigger close or exit event hi when i spawn a process and write in his stdin he will not trigger close or exit event javascript emuobject currentprocess spawn externalcommands cmdargs cwd emuobject cd emuobject stdin pipe emuobject currentprocess stdin emuobject currentprocess stdout on data function data self emit createoutput emuobject data tostring emuobject currentprocess stderr on data function data self emit createerror emuobject data tostring emuobject currentprocess on exit function code signal console log exited process emuobject currentprocess on close function code signal console log closed process if code emuobject currentprocess null self emit createline emuobject emuobject cd else emuobject currentprocess null self emit createline emuobject emuobject cd self emit killemu emuobject true i tried javascript emuobject currentprocess stdin resume emuobject currentprocess stdin write cmd n emuobject currentprocess stdin pause but it does nothing i tried too javascript emuobject currentprocess stdin write cmd n emuobject currentprocess stdin end but it end the child process instantly just in case it can have something to do with this issue i m using nw js any idea
binary_label: 1
4,085
7,042,947,844
IssuesEvent
2017-12-30 20:35:33
eve-savvy/eve-mail
https://api.github.com/repos/eve-savvy/eve-mail
closed
Authorization window stays open after successful authorization
bug help wanted renderer process
### Issue When an auth request opens the browser to allow for a user to authenticate a character the browser remains open on the authorization request after the user has successfully authenticated. ### Expectation The user should either be notified that they can close the external browser window, or the external browser window should close/notify the user that authentication was successful.
1.0
Authorization window stays open after successful authorization - ### Issue When an auth request opens the browser to allow for a user to authenticate a character the browser remains open on the authorization request after the user has successfully authenticated. ### Expectation The user should either be notified that they can close the external browser window, or the external browser window should close/notify the user that authentication was successful.
process
authorization window stays open after successful authorization issue when an auth request opens the browser to allow for a user to authenticate a character the browser remains open on the authorization request after the user has successfully authenticated expectation the user should either be notified that they can close the external browser window or the external browser window should close notify the user that authentication was successful
1
112,172
9,556,120,890
IssuesEvent
2019-05-03 07:15:30
kowainik/co-log
https://api.github.com/repos/kowainik/co-log
opened
[RFC] Laws property-based tests for `LogAction`
question tests
It would be really nice to have tests for the logging functionality, especially property-based testing. However, it's not really clear how to test it. It would be really nice though to have property-based tests for laws for every data type. However, in order to do this, we need to implement the following two pieces: - [ ] **Question 1:** Some way to generate arbitrary `LogAction` - [ ] **Question 2:** Some way to compare `LogAction` **Question 1** is still open. For **Question 2**: instead of comparing `LogAction`s we can compare results of `PureLogger`. We have `doctest` for unit testing, so this should be enough in terms of unit tests (though, more functions always can be covered with `doctest`). But having property-based tests for laws would be really good. Any thoughts are appreciated!
1.0
[RFC] Laws property-based tests for `LogAction` - It would be really nice to have tests for the logging functionality, especially property-based testing. However, it's not really clear how to test it. It would be really nice though to have property-based tests for laws for every data type. However, in order to do this, we need to implement the following two pieces: - [ ] **Question 1:** Some way to generate arbitrary `LogAction` - [ ] **Question 2:** Some way to compare `LogAction` **Question 1** is still open. For **Question 2**: instead of comparing `LogAction`s we can compare results of `PureLogger`. We have `doctest` for unit testing, so this should be enough in terms of unit tests (though, more functions always can be covered with `doctest`). But having property-based tests for laws would be really good. Any thoughts are appreciated!
non_process
laws property based tests for logaction it would be really nice to have tests for the logging functionality especially property based testing however it s not really clear how to test it it would be really nice though to have property based tests for laws for every data type however in order to do this we need to implement the following two pieces question some way to generate arbitrary logaction question some way to compare logaction question is still open for question instead of comparing logaction s we can compare results of purelogger we have doctest for unit testing so this should be enough in terms of unit tests though more functions always can be covered with doctest but having property based tests for laws would be really good any thoughts are appreciated
0
164,676
6,246,009,401
IssuesEvent
2017-07-13 02:04:20
jstanden/cerb
https://api.github.com/repos/jstanden/cerb
closed
[Refactor] Remove joins on message worklists (ticket, address)
priority-support refactor
The full list of millions of messages is causing issues in some environments.
1.0
[Refactor] Remove joins on message worklists (ticket, address) - The full list of millions of messages is causing issues in some environments.
non_process
remove joins on message worklists ticket address the full list of millions of messages is causing issues in some environments
0
140,201
12,889,161,748
IssuesEvent
2020-07-13 14:08:02
saltstack/salt
https://api.github.com/repos/saltstack/salt
closed
[DOCS] Missing docs for "requires" requisite in Salt Cloud maps
Bug Documentation Magnesium P3 Severity: Low
**Description** Received report on Slack: > In salt-cloud source I found that it supports `requires` directive (in the map file), although I cannot find any documentation for it. It seems to be a way to define that one VM requires another to be created (which is what I need), but I don't know how to use that directive. **Setup** Snippet of what the reporter found to work: ``` profile: - vm1: devices: ... - vm2: requires: - vm1 devices: ... ``` **Expected behavior** Need to get the docs updated w/ this info.
1.0
[DOCS] Missing docs for "requires" requisite in Salt Cloud maps - **Description** Received report on Slack: > In salt-cloud source I found that it supports `requires` directive (in the map file), although I cannot find any documentation for it. It seems to be a way to define that one VM requires another to be created (which is what I need), but I don't know how to use that directive. **Setup** Snippet of what the reporter found to work: ``` profile: - vm1: devices: ... - vm2: requires: - vm1 devices: ... ``` **Expected behavior** Need to get the docs updated w/ this info.
non_process
missing docs for requires requisite in salt cloud maps description received report on slack in salt cloud source i found that it supports requires directive in the map file although i cannot find any documentation for it it seems to be a way to define that one vm requires another to be created which is what i need but i don t know how to use that directive setup snippet of what the reporter found to work profile devices requires devices expected behavior need to get the docs updated w this info
0
335,664
10,164,851,899
IssuesEvent
2019-08-07 12:41:51
pmem/issues
https://api.github.com/repos/pmem/issues
closed
Test: vmmalloc_init/TEST17: SETUP (all/pmem/nondebug/memcheck) fails
Exposure: Low OS: Linux Priority: 3 medium State: To be verified Type: Bug
<!-- Before creating new issue, ensure that similar issue wasn't already created * Search: https://github.com/pmem/issues/issues Note that if you do not provide enough information to reproduce the issue, we may not be able to take action on your report. Remember this is just a minimal template. You can extend it with data you think may be useful. --> # ISSUE: <!-- fill the title of issue --> ## Environment Information - PMDK package version(s): 1.6-101-gba30d1b1f - OS(es) version(s): Ubuntu19.01 - ndctl version(s): 63+ - kernel version(s): 5.0.0-21-generic <!-- fill in also other useful environment data --> ## Please provide a reproduction of the bug: ``` ./RUNTESTS vmmalloc_init -s TEST7 -m force-enable -t all ./RUNTESTS vmmalloc_init -s TEST17 -m force-enable -t all ``` ## How often bug is revealed: (always, often, rare): <!-- check one if possible --> <!-- describe special circumstances in section above --> ## Actual behavior: ``` $ ./RUNTESTS vmmalloc_init -s TEST7 -t all -m force-enable vmmalloc_init/TEST7: SETUP (all/pmem/debug/memcheck) RUNTESTS: stopping: vmmalloc_init/TEST7 failed, TEST=all FS=any BUILD=debug ``` ## Expected behavior: Tests should pass. ## Details Logs: [vmmalloc_init.zip](https://github.com/pmem/issues/files/3467800/vmmalloc_init.zip) ## Additional information about Priority and Help Requested: Are you willing to submit a pull request with a proposed change? (Yes, No) <!-- check one if possible --> Requested priority: (Showstopper, High, Medium, Low) <!-- check one if possible -->
1.0
Test: vmmalloc_init/TEST17: SETUP (all/pmem/nondebug/memcheck) fails - <!-- Before creating new issue, ensure that similar issue wasn't already created * Search: https://github.com/pmem/issues/issues Note that if you do not provide enough information to reproduce the issue, we may not be able to take action on your report. Remember this is just a minimal template. You can extend it with data you think may be useful. --> # ISSUE: <!-- fill the title of issue --> ## Environment Information - PMDK package version(s): 1.6-101-gba30d1b1f - OS(es) version(s): Ubuntu19.01 - ndctl version(s): 63+ - kernel version(s): 5.0.0-21-generic <!-- fill in also other useful environment data --> ## Please provide a reproduction of the bug: ``` ./RUNTESTS vmmalloc_init -s TEST7 -m force-enable -t all ./RUNTESTS vmmalloc_init -s TEST17 -m force-enable -t all ``` ## How often bug is revealed: (always, often, rare): <!-- check one if possible --> <!-- describe special circumstances in section above --> ## Actual behavior: ``` $ ./RUNTESTS vmmalloc_init -s TEST7 -t all -m force-enable vmmalloc_init/TEST7: SETUP (all/pmem/debug/memcheck) RUNTESTS: stopping: vmmalloc_init/TEST7 failed, TEST=all FS=any BUILD=debug ``` ## Expected behavior: Tests should pass. ## Details Logs: [vmmalloc_init.zip](https://github.com/pmem/issues/files/3467800/vmmalloc_init.zip) ## Additional information about Priority and Help Requested: Are you willing to submit a pull request with a proposed change? (Yes, No) <!-- check one if possible --> Requested priority: (Showstopper, High, Medium, Low) <!-- check one if possible -->
non_process
test vmmalloc init setup all pmem nondebug memcheck fails before creating new issue ensure that similar issue wasn t already created search note that if you do not provide enough information to reproduce the issue we may not be able to take action on your report remember this is just a minimal template you can extend it with data you think may be useful issue environment information pmdk package version s os es version s ndctl version s kernel version s generic please provide a reproduction of the bug runtests vmmalloc init s m force enable t all runtests vmmalloc init s m force enable t all how often bug is revealed always often rare actual behavior runtests vmmalloc init s t all m force enable vmmalloc init setup all pmem debug memcheck runtests stopping vmmalloc init failed test all fs any build debug expected behavior tests should pass details logs additional information about priority and help requested are you willing to submit a pull request with a proposed change yes no requested priority showstopper high medium low
0
622,047
19,605,382,754
IssuesEvent
2022-01-06 08:48:15
parallel-finance/parallel
https://api.github.com/repos/parallel-finance/parallel
closed
Switch from cDOT-project to cDOT-lease
high priority
## Solutions 1. reserve token while minting cDOT to user, users need to come to us to unreserve (only work for cDOT-project it seems) 2. dont mint cDOT after contribution, issue cDOT after winning (via on_idle + childtrie) 3. dont mint cDOT after contribution, issue cDOT after winning (via offchain-client)
1.0
Switch from cDOT-project to cDOT-lease - ## Solutions 1. reserve token while minting cDOT to user, users need to come to us to unreserve (only work for cDOT-project it seems) 2. dont mint cDOT after contribution, issue cDOT after winning (via on_idle + childtrie) 3. dont mint cDOT after contribution, issue cDOT after winning (via offchain-client)
non_process
switch from cdot project to cdot lease solutions reserve token while minting cdot to user users need to come to us to unreserve only work for cdot project it seems dont mint cdot after contribution issue cdot after winning via on idle childtrie dont mint cdot after contribution issue cdot after winning via offchain client
0
162,938
12,698,122,211
IssuesEvent
2020-06-22 12:58:25
radareorg/radare2
https://api.github.com/repos/radareorg/radare2
closed
add hexdump metadata(Cd) to data sections
enhancement test-required
After analysis (aaa), we can use `Cd 1` for all addresses in data sections (only where there's no other metadata) so that it will be easy in visual mode to know when you are in data sections. Something like what IDA does.
1.0
add hexdump metadata(Cd) to data sections - After analysis (aaa), we can use `Cd 1` for all addresses in data sections (only where there's no other metadata) so that it will be easy in visual mode to know when you are in data sections. Something like what IDA does.
non_process
add hexdump metadata cd to data sections after analysis aaa we can use cd for all addresses in data sections only where there s no other metadata so that it will be easy in visual mode to know when you are in data sections something like what ida does
0
11,864
14,665,727,051
IssuesEvent
2020-12-29 14:51:05
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
DITA-OT does not rewrite xrefs to topic elements correctly during preprocessing when chunk = "to-content" is specified
bug preprocess preprocess/chunking priority/medium stale
When chunk = "to-content" is specified, DITA-OT creates a ditabase document during preprocessing that contains all of the topics that are being chunked. If the topics' IDs contain any duplicates, then unique IDs are generated to make all of the topic IDs unique in the ditabase document. However, xrefs to topic elements don't properly get rewritten. I am enclosing a sample .ditamap containing three topics that demonstrates the problem. In the sample, the topics intro.dita, topic1.dita, and topic2.dita all contain id = "topicid" on the topic. This is valid since the ID is unique within each topic. The chunk.ditamap contains chunk = "to-content" on intro.dita telling the DITA-OT that we want intro.dita, topic1.dita, and topic2.dita output as a single file. During preprocessing, the DITA-OT will create intro.dita as a single ditabase document containing the contents of the three topics. Since taken together there would be duplicate IDs, the DITA-OT writes unique IDs for the contents of topic1.dita and topic2.dita in the ditabase document. The problem is the xrefs in topic1.dita and topic2.dita are not rewritten to contain the new, unique IDs. Reproducible using DITA-OT1.7.5 and DITA-OT1.8.4. https://drive.google.com/file/d/0ByZs7YV5C0j0QWV3TVppSGRZXzQ/edit?usp=sharing
2.0
DITA-OT does not rewrite xrefs to topic elements correctly during preprocessing when chunk = "to-content" is specified - When chunk = "to-content" is specified, DITA-OT creates a ditabase document during preprocessing that contains all of the topics that are being chunked. If the topics' IDs contain any duplicates, then unique IDs are generated to make all of the topic IDs unique in the ditabase document. However, xrefs to topic elements don't properly get rewritten. I am enclosing a sample .ditamap containing three topics that demonstrates the problem. In the sample, the topics intro.dita, topic1.dita, and topic2.dita all contain id = "topicid" on the topic. This is valid since the ID is unique within each topic. The chunk.ditamap contains chunk = "to-content" on intro.dita telling the DITA-OT that we want intro.dita, topic1.dita, and topic2.dita output as a single file. During preprocessing, the DITA-OT will create intro.dita as a single ditabase document containing the contents of the three topics. Since taken together there would be duplicate IDs, the DITA-OT writes unique IDs for the contents of topic1.dita and topic2.dita in the ditabase document. The problem is the xrefs in topic1.dita and topic2.dita are not rewritten to contain the new, unique IDs. Reproducible using DITA-OT1.7.5 and DITA-OT1.8.4. https://drive.google.com/file/d/0ByZs7YV5C0j0QWV3TVppSGRZXzQ/edit?usp=sharing
process
dita ot does not rewrite xrefs to topic elements correctly during preprocessing when chunk to content is specified when chunk to content is specified dita ot creates a ditabase document during preprocessing that contains all of the topics that are being chunked if the topics ids contain any duplicates then unique ids are generated to make all of the topic ids unique in the ditabase document however xrefs to topic elements don t properly get rewritten i am enclosing a sample ditamap containing three topics that demonstrates the problem in the sample the topics intro dita dita and dita all contain id topicid on the topic this is valid since the id is unique within each topic the chunk ditamap contains chunk to content on intro dita telling the dita ot that we want intro dita dita and dita output as a single file during preprocessing the dita ot will create intro dita as a single ditabase document containing the contents of the three topics since taken together there would be duplicate ids the dita ot writes unique ids for the contents of dita and dita in the ditabase document the problem is the xrefs in dita and dita are not rewritten to contain the new unique ids reproducible using dita and dita
1
257,720
19,529,922,343
IssuesEvent
2021-12-30 14:54:30
cython/cython
https://api.github.com/repos/cython/cython
closed
[BUG] Namespace is not inserted if a cppclass is renamed
R: worksforme Documentation
**Describe the bug** When renaming a `cppclass` (`cdef cppclass CppObject "Object"`), namespace is not inserted into the compiled code. Though everythins is okay if namespace is already provided in the 'renaming string', like `cdef cppclass CppObject "nspace::Object"`. I am not sure if it is a bug, but this moment is not explained in the documentation. **To Reproduce** Code to reproduce the behaviour: **test.pyx**: ```cython # distutils: language = c++ # distutils: sources = cpp_lib.cpp cdef extern from "cpp_lib.h" namespace "test": # it works with cdef cppclass CppTest "test::Test" cdef cppclass CppTest "Test": CppTest() cdef class Test: cdef CppTest test ``` **cpp_lib.h**: ```C++ namespace test { class Test { public: Test(); }; } ``` **cpp_lib.cpp**: ``` C++ #include "cpp_lib.h" namespace test { Test::Test() {}; } ``` **Expected behavior** I expected Cython to insert the namespace automatically since it is already provided after `namespace` keyword, but for some reason I have to write it explicitely for every cppclass (if I want to rename it). I think the easiest way is to explain class renaming in the documentation to avoid possible confusion. **Environment (please complete the following information):** - OS: Windows 10 Pro, Version 10.0.19.044 Build 19044 - Python version: 3.7.9 - Cython version: **Additional context** The error message: ``` test.cpp test.cpp(935): error C3646: 'test': unknown override specifier test.cpp(935): error C4430: missing type specifier - int assumed.
Note: C++ does not support default-int test.cpp(1340): error C2882: 'test': illegal use of namespace identifier in expression test.cpp(1340): error C2061: syntax error: identifier 'Test' test.cpp(1351): error C2882: 'test': illegal use of namespace identifier in expression test.cpp(1351): error C2672: '__Pyx_call_destructor': no matching overloaded function found error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2 ```
1.0
[BUG] Namespace is not inserted if a cppclass is renamed - **Describe the bug** When renaming a `cppclass` (`cdef cppclass CppObject "Object"`), namespace is not inserted into the compiled code. Though everythins is okay if namespace is already provided in the 'renaming string', like `cdef cppclass CppObject "nspace::Object"`. I am not sure if it is a bug, but this moment is not explained in the documentation. **To Reproduce** Code to reproduce the behaviour: **test.pyx**: ```cython # distutils: language = c++ # distutils: sources = cpp_lib.cpp cdef extern from "cpp_lib.h" namespace "test": # it works with cdef cppclass CppTest "test::Test" cdef cppclass CppTest "Test": CppTest() cdef class Test: cdef CppTest test ``` **cpp_lib.h**: ```C++ namespace test { class Test { public: Test(); }; } ``` **cpp_lib.cpp**: ``` C++ #include "cpp_lib.h" namespace test { Test::Test() {}; } ``` **Expected behavior** I expected Cython to insert the namespace automatically since it is already provided after `namespace` keyword, but for some reason I have to write it explicitely for every cppclass (if I want to rename it). I think the easiest way is to explain class renaming in the documentation to avoid possible confusion. **Environment (please complete the following information):** - OS: Windows 10 Pro, Version 10.0.19.044 Build 19044 - Python version: 3.7.9 - Cython version: **Additional context** The error message: ``` test.cpp test.cpp(935): error C3646: 'test': unknown override specifier test.cpp(935): error C4430: missing type specifier - int assumed.
Note: C++ does not support default-int test.cpp(1340): error C2882: 'test': illegal use of namespace identifier in expression test.cpp(1340): error C2061: syntax error: identifier 'Test' test.cpp(1351): error C2882: 'test': illegal use of namespace identifier in expression test.cpp(1351): error C2672: '__Pyx_call_destructor': no matching overloaded function found error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2 ```
non_process
namespace is not inserted if a cppclass is renamed describe the bug when renaming a cppclass cdef cppclass cppobject object namespace is not inserted into the compiled code though everythins is okay if namespace is already provided in the renaming string like cdef cppclass cppobject nspace object i am not sure if it is a bug but this moment is not explained in the documentation to reproduce code to reproduce the behaviour test pyx cython distutils language c distutils sources cpp lib cpp cdef extern from cpp lib h namespace test it works with cdef cppclass cpptest test test cdef cppclass cpptest test cpptest cdef class test cdef cpptest test cpp lib h c namespace test class test public test cpp lib cpp c include cpp lib h namespace test test test expected behavior i expected cython to insert the namespace automatically since it is already provided after namespace keyword but for some reason i have to write it explicitely for every cppclass if i want to rename it i think the easiest way is to explain class renaming in the documentation to avoid possible confusion environment please complete the following information os windows pro version build python version cython version additional context the error message test cpp test cpp error test unknown override specifier test cpp error missing type specifier int assumed note c does not support default int test cpp error test illegal use of namespace identifier in expression test cpp error syntax error identifier test test cpp error test illegal use of namespace identifier in expression test cpp error pyx call destructor no matching overloaded function found error command c program files microsoft visual studio community vc tools msvc bin cl exe failed with exit status
0
100,432
12,521,706,360
IssuesEvent
2020-06-03 17:51:49
cr-ste-justine/clin-project
https://api.github.com/repos/cr-ste-justine/clin-project
opened
Rajouter d'autres colonnes selectionnable a la liste des patients
Clin Clin Design Front-end
Ajouter dans l'option "Afficher" les colonnes suivantes: ![image.png](https://images.zenhubusercontent.com/5c5c6036df0733a30555cc4e/32fd2271-ee71-446b-a142-cb2f599a6c29) - MRN - RAMQ, ID Famille - Famille - ID Specimen - Étude ** Ces colonnes seront décochée par defaut **
1.0
Rajouter d'autres colonnes selectionnable a la liste des patients - Ajouter dans l'option "Afficher" les colonnes suivantes: ![image.png](https://images.zenhubusercontent.com/5c5c6036df0733a30555cc4e/32fd2271-ee71-446b-a142-cb2f599a6c29) - MRN - RAMQ, ID Famille - Famille - ID Specimen - Étude ** Ces colonnes seront décochée par defaut **
non_process
rajouter d autres colonnes selectionnable a la liste des patients ajouter dans l'option "afficher" les colonnes suivantes mrn ramq id famille famille id specimen étude ces colonnes seront décochée par defaut
0
41,373
10,707,566,679
IssuesEvent
2019-10-24 17:45:28
alan-turing-institute/sktime
https://api.github.com/repos/alan-turing-institute/sktime
opened
Compile manylinux wheels
build / ci help wanted
Currently, we only get wheels for specific linux versions which are not accepted by PyPI. We need to set up our CI to build wheels following the [manylinux](https://github.com/pypa/manylinux) convention.
1.0
Compile manylinux wheels - Currently, we only get wheels for specific linux versions which are not accepted by PyPI. We need to set up our CI to build wheels following the [manylinux](https://github.com/pypa/manylinux) convention.
non_process
compile manylinux wheels currently we only get wheels for specific linux versions which are not accepted by pypi we need to set up our ci to build wheels following the convention
0
64,229
18,286,202,977
IssuesEvent
2021-10-05 10:33:07
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Room settings save button can be incorrectly disabled even though there are changes
T-Defect
### Steps to reproduce 1. Open Room Settings. 2. Edit the Room Name field. The Save button gets enabled. 3. Edit the Room Topic field. The Save button is still enabled. 4. Revert the edit to the Room Topic field. The Save button now incorrectly gets disabled. ### What happened? ### What did you expect? The Save button should remain enabled because the Room Name is still changed from the starting state. ### What happened? The Save button was disabled. I'm using the Desktop version but this likely applies to Element Web too. ### Operating system Arch Linux ### Application version Element version: 1.8.5 Olm version: 3.2.3 ### How did you install the app? pacman ### Homeserver _No response_ ### Have you submitted a rageshake? No
1.0
Room settings save button can be incorrectly disabled even though there are changes - ### Steps to reproduce 1. Open Room Settings. 2. Edit the Room Name field. The Save button gets enabled. 3. Edit the Room Topic field. The Save button is still enabled. 4. Revert the edit to the Room Topic field. The Save button now incorrectly gets disabled. ### What happened? ### What did you expect? The Save button should remain enabled because the Room Name is still changed from the starting state. ### What happened? The Save button was disabled. I'm using the Desktop version but this likely applies to Element Web too. ### Operating system Arch Linux ### Application version Element version: 1.8.5 Olm version: 3.2.3 ### How did you install the app? pacman ### Homeserver _No response_ ### Have you submitted a rageshake? No
non_process
room settings save button can be incorrectly disabled even though there are changes steps to reproduce open room settings edit the room name field the save button gets enabled edit the room topic field the save button is still enabled revert the edit to the room topic field the save button now incorrectly gets disabled what happened what did you expect the save button should remain enabled because the room name is still changed from the starting state what happened the save button was disabled i m using the desktop version but this likely applies to element web too operating system arch linux application version element version olm version how did you install the app pacman homeserver no response have you submitted a rageshake no
0
4,785
7,661,273,284
IssuesEvent
2018-05-11 13:44:51
SharePoint/PnP-PowerShell
https://api.github.com/repos/SharePoint/PnP-PowerShell
closed
Add-PnPFile can't write Values Modified & Created
Needs investigation To be processed
### Reporting an Issue or Missing Feature When uploading a file with Add-PnPFile values for Modified & Created are ignored ### Expected behavior > PS > $pubdate > **Thursday, 19 November 2015 2:47:38 PM** > > PS > $pubdate.GetType() > IsPublic IsSerial Name BaseType > True True **DateTime** System.ValueType > > PS > Add-PnPFile -Path $path -Folder $folder -Values @{Modified="$pubdate"; Created="$pubdate"} > Name Type Items/Size Last Modified > 2015_07 Annual Leave.pdf File 57000 **19/11/2015 2:47:38 PM** ### Actual behavior > PS > $pubdate > **Thursday, 19 November 2015 2:47:38 PM** > > PS > $pubdate.GetType() > IsPublic IsSerial Name BaseType > True True **DateTime** System.ValueType > > PS > Add-PnPFile -Path $path -Folder $folder -Values @{Modified="$pubdate"; Created="$pubdate"} > Name Type Items/Size Last Modified > 2015_07 Annual Leave.pdf File 57000 **7/05/2018 5:49:01 AM** ### Steps to reproduce behavior Happens with every file upload for me, I can't set the modified or created date no matter what I do. However I can create a new Date & Time column and set that just fine. ### Which version of the PnP-PowerShell Cmdlets are you using? - [ ] PnP PowerShell for SharePoint 2013 - [ ] PnP PowerShell for SharePoint 2016 - [x] PnP PowerShell for SharePoint Online ### What is the version of the Cmdlet module you are running? SharePointPnPPowerShellOnline 2.25.1804.1 ### How did you install the PnP-PowerShell Cmdlets? - [ ] MSI Installed downloaded from GitHub - [x] Installed through the PowerShell Gallery with Install-Module - [ ] Other means
1.0
Add-PnPFile can't write Values Modified & Created - ### Reporting an Issue or Missing Feature When uploading a file with Add-PnPFile values for Modified & Created are ignored ### Expected behavior > PS > $pubdate > **Thursday, 19 November 2015 2:47:38 PM** > > PS > $pubdate.GetType() > IsPublic IsSerial Name BaseType > True True **DateTime** System.ValueType > > PS > Add-PnPFile -Path $path -Folder $folder -Values @{Modified="$pubdate"; Created="$pubdate"} > Name Type Items/Size Last Modified > 2015_07 Annual Leave.pdf File 57000 **19/11/2015 2:47:38 PM** ### Actual behavior > PS > $pubdate > **Thursday, 19 November 2015 2:47:38 PM** > > PS > $pubdate.GetType() > IsPublic IsSerial Name BaseType > True True **DateTime** System.ValueType > > PS > Add-PnPFile -Path $path -Folder $folder -Values @{Modified="$pubdate"; Created="$pubdate"} > Name Type Items/Size Last Modified > 2015_07 Annual Leave.pdf File 57000 **7/05/2018 5:49:01 AM** ### Steps to reproduce behavior Happens with every file upload for me, I can't set the modified or created date no matter what I do. However I can create a new Date & Time column and set that just fine. ### Which version of the PnP-PowerShell Cmdlets are you using? - [ ] PnP PowerShell for SharePoint 2013 - [ ] PnP PowerShell for SharePoint 2016 - [x] PnP PowerShell for SharePoint Online ### What is the version of the Cmdlet module you are running? SharePointPnPPowerShellOnline 2.25.1804.1 ### How did you install the PnP-PowerShell Cmdlets? - [ ] MSI Installed downloaded from GitHub - [x] Installed through the PowerShell Gallery with Install-Module - [ ] Other means
process
add pnpfile can t write values modified created reporting an issue or missing feature when uploading a file with add pnpfile values for modified created are ignored expected behavior ps pubdate thursday november pm ps pubdate gettype ispublic isserial name basetype true true datetime system valuetype ps add pnpfile path path folder folder values modified pubdate created pubdate name type items size last modified annual leave pdf file pm actual behavior ps pubdate thursday november pm ps pubdate gettype ispublic isserial name basetype true true datetime system valuetype ps add pnpfile path path folder folder values modified pubdate created pubdate name type items size last modified annual leave pdf file am steps to reproduce behavior happens with every file upload for me i can t set the modified or created date no matter what i do however i can create a new date time column and set that just fine which version of the pnp powershell cmdlets are you using pnp powershell for sharepoint pnp powershell for sharepoint pnp powershell for sharepoint online what is the version of the cmdlet module you are running sharepointpnppowershellonline how did you install the pnp powershell cmdlets msi installed downloaded from github installed through the powershell gallery with install module other means
1
22,252
30,802,546,347
IssuesEvent
2023-08-01 03:27:04
h4sh5/npm-auto-scanner
https://api.github.com/repos/h4sh5/npm-auto-scanner
opened
zip_achive_bp 1.999.0 has 2 guarddog issues
npm-install-script npm-silent-process-execution
```{"npm-install-script":[{"code":" \"postinstall\": \"node preinstall.js\",","location":"package/package.json:7","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"const child = spawn('node', ['index.js'], {\n detached: true,\n stdio: 'ignore'\n});","location":"package/preinstall.js:3","message":"This package is silently executing another executable"}]}```
1.0
zip_achive_bp 1.999.0 has 2 guarddog issues - ```{"npm-install-script":[{"code":" \"postinstall\": \"node preinstall.js\",","location":"package/package.json:7","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"const child = spawn('node', ['index.js'], {\n detached: true,\n stdio: 'ignore'\n});","location":"package/preinstall.js:3","message":"This package is silently executing another executable"}]}```
process
zip achive bp has guarddog issues npm install script npm silent process execution n detached true n stdio ignore n location package preinstall js message this package is silently executing another executable
1
21,173
28,144,306,237
IssuesEvent
2023-04-02 09:59:40
bitfocus/companion-module-requests
https://api.github.com/repos/bitfocus/companion-module-requests
opened
CP750 Remote control
NOT YET PROCESSED
The name of the device, hardware, or software you would like to control: Dolby CP750 What you would like to be able to make it do from Companion: Remote control of format and fader, and feedback, as already existing on companion module for CP650 and CP950 Direct links or attachments to the ethernet control protocol or API: [CP750 manual_5.pdf](https://github.com/bitfocus/companion-module-requests/files/11131755/CP750.manual_5.pdf)
1.0
CP750 Remote control - The name of the device, hardware, or software you would like to control: Dolby CP750 What you would like to be able to make it do from Companion: Remote control of format and fader, and feedback, as already existing on companion module for CP650 and CP950 Direct links or attachments to the ethernet control protocol or API: [CP750 manual_5.pdf](https://github.com/bitfocus/companion-module-requests/files/11131755/CP750.manual_5.pdf)
process
remote control the name of the device hardware or software you would like to control dolby what you would like to be able to make it do from companion remote control of format and fader and feedback as already existing on companion module for and direct links or attachments to the ethernet control protocol or api
1
18,602
24,576,114,471
IssuesEvent
2022-10-13 12:29:20
aiidateam/aiida-core
https://api.github.com/repos/aiidateam/aiida-core
closed
`WithSerialize.serialize` calls the serializer even if the provided value is `None`
type/bug priority/important topic/engine topic/processes
This leads to problems with the `to_aiida_type` serializer, which will raise a `ValueError` when it receives `None`. Now it can be discussed whether `to_aiida_type` should simply register `None` and simply return the value when it receives `None` instead of excepting. However, does it ever make sense for a serializer on a port to be called if the provided input is `None`? This problem came to light after #5688 was merged which added the `to_aiida_type` serializer to the dynamically generated ports of `ProcessFunction`s. The following will now raise: ```python @calcfunction def math(a, b, c=None): result = a + b if c is not None: result += c return result math(1, 2) # runs fine math(1, 2, 3) # runs fine math(1, 2, c=None) # this will except saying `None` cannot be automatically serialized ``` From a user perspective, not specifying `c` or explicitly specifying `c=None` should result in the same behavior.
1.0
`WithSerialize.serialize` calls the serializer even if the provided value is `None` - This leads to problems with the `to_aiida_type` serializer, which will raise a `ValueError` when it receives `None`. Now it can be discussed whether `to_aiida_type` should simply register `None` and simply return the value when it receives `None` instead of excepting. However, does it ever make sense for a serializer on a port to be called if the provided input is `None`? This problem came to light after #5688 was merged which added the `to_aiida_type` serializer to the dynamically generated ports of `ProcessFunction`s. The following will now raise: ```python @calcfunction def math(a, b, c=None): result = a + b if c is not None: result += c return result math(1, 2) # runs fine math(1, 2, 3) # runs fine math(1, 2, c=None) # this will except saying `None` cannot be automatically serialized ``` From a user perspective, not specifying `c` or explicitly specifying `c=None` should result in the same behavior.
process
withserialize serialize calls the serializer even if the provided value is none this leads to problems with the to aiida type serializer which will raise a valueerror when it receives none now it can be discussed whether to aiida type should simply register none and simply return the value when it receives none instead of excepting however does it ever make sense for a serializer on a port to be called if the provided input is none this problem came to light after was merged which added the to aiida type serializer to the dynamically generated ports of processfunction s the following will now raise python calcfunction def math a b c none result a b if c is not none result c return result math runs fine math runs fine math c none this will except saying none cannot be automatically serialized from a user perspective not specifying c or explicitly specifying c none should result in the same behavior
1
11,028
13,836,093,253
IssuesEvent
2020-10-14 00:11:29
googleapis/gapic-generator-typescript
https://api.github.com/repos/googleapis/gapic-generator-typescript
closed
gts@3 issues related to generated code
type: process
I have a release candidate of `gts@3` sitting here: https://www.npmjs.com/package/gts/v/3.0.0-alpha.1 Still working on a changelog, but it's mostly upgrading all the dependencies. I have a PR out on all the repos, and I'm seeing runs like this: https://github.com/googleapis/nodejs-vision/pull/823/checks?check_run_id=1078358969 The pattern looks to match: ``` /home/runner/work/nodejs-vision/nodejs-vision/src/v1/image_annotator_client.ts ##[warning] 22:3 warning 'CallOptions' is defined but never used @typescript-eslint/no-unused-vars ##[error] 32:17 error Require statement not part of import statement @typescript-eslint/no-var-requires ``` Ideally we'd address both of these before rolling out the 3.x release. Right now I'm just poking around for what's going to cause trouble :)
1.0
gts@3 issues related to generated code - I have a release candidate of `gts@3` sitting here: https://www.npmjs.com/package/gts/v/3.0.0-alpha.1 Still working on a changelog, but it's mostly upgrading all the dependencies. I have a PR out on all the repos, and I'm seeing runs like this: https://github.com/googleapis/nodejs-vision/pull/823/checks?check_run_id=1078358969 The pattern looks to match: ``` /home/runner/work/nodejs-vision/nodejs-vision/src/v1/image_annotator_client.ts ##[warning] 22:3 warning 'CallOptions' is defined but never used @typescript-eslint/no-unused-vars ##[error] 32:17 error Require statement not part of import statement @typescript-eslint/no-var-requires ``` Ideally we'd address both of these before rolling out the 3.x release. Right now I'm just poking around for what's going to cause trouble :)
process
gts issues related to generated code i have a release candidate of gts sitting here still working on a changelog but it s mostly upgrading all the dependencies i have a pr out on all the repos and i m seeing runs like this the pattern looks to match home runner work nodejs vision nodejs vision src image annotator client ts warning calloptions is defined but never used typescript eslint no unused vars error require statement not part of import statement typescript eslint no var requires ideally we d address both of these before rolling out the x release right now i m just poking around for what s going to cause trouble
1
1,895
11,042,642,182
IssuesEvent
2019-12-09 09:35:16
DevExpress/testcafe
https://api.github.com/repos/DevExpress/testcafe
opened
We should give a capability to switch to invisible iframe
AREA: client SYSTEM: automations TYPE: enhancement support center
Reasons: 0) some sites use invisible iframes to perform some util logic (for example see [a Studio issue](https://github.com/DevExpress/testcafe-studio/issues/2881)) 1) we have actions that don't check element's visiblity before execution (`pressKey, navigateTo` etc.) 2) we have even one action with selector that doesn't check element's visibility - `setFilesToUpload` 3) users use a hack to execute such switching, they assign `1px*1px` size to an invisible iframe. So there is only 1px but our behavior is absolutely different in this case. Test should fail during action execution for this iframe if the required element is invisible.
1.0
We should give a capability to switch to invisible iframe - Reasons: 0) some sites use invisible iframes to perform some util logic (for example see [a Studio issue](https://github.com/DevExpress/testcafe-studio/issues/2881)) 1) we have actions that don't check element's visiblity before execution (`pressKey, navigateTo` etc.) 2) we have even one action with selector that doesn't check element's visibility - `setFilesToUpload` 3) users use a hack to execute such switching, they assign `1px*1px` size to an invisible iframe. So there is only 1px but our behavior is absolutely different in this case. Test should fail during action execution for this iframe if the required element is invisible.
non_process
we should give a capability to switch to invisible iframe reasons some sites use invisible iframes to perform some util logic for example see we have actions that don t check element s visiblity before execution presskey navigateto etc we have even one action with selector that doesn t check element s visibility setfilestoupload users use a hack to execute such switching they assign size to an invisible iframe so there is only but our behavior is absolutely different in this case test should fail during action execution for this iframe if the required element is invisible
0
160,203
12,505,817,712
IssuesEvent
2020-06-02 11:26:50
aliasrobotics/RVD
https://api.github.com/repos/aliasrobotics/RVD
opened
subprocess call with shell=True identified, security issue., /opt/ros_melodic_ws/src/ros/rosbash/test/test_scripts.py:55
bandit bug static analysis testing triage
```yaml { "id": 1, "title": "subprocess call with shell=True identified, security issue., /opt/ros_melodic_ws/src/ros/rosbash/test/test_scripts.py:55", "type": "bug", "description": "HIGH confidence of HIGH severity bug. subprocess call with shell=True identified, security issue. at /opt/ros_melodic_ws/src/ros/rosbash/test/test_scripts.py:55 See links for more info on the bug.", "cwe": "None", "cve": "None", "keywords": [ "bandit", "bug", "static analysis", "testing", "triage", "bug" ], "system": "", "vendor": null, "severity": { "rvss-score": 0, "rvss-vector": "", "severity-description": "", "cvss-score": 0, "cvss-vector": "" }, "links": "", "flaw": { "phase": "testing", "specificity": "subject-specific", "architectural-location": "application-specific", "application": "N/A", "subsystem": "N/A", "package": "N/A", "languages": "None", "date-detected": "2020-06-02 (11:26)", "detected-by": "Alias Robotics", "detected-by-method": "testing static", "date-reported": "2020-06-02 (11:26)", "reported-by": "Alias Robotics", "reported-by-relationship": "automatic", "issue": "", "reproducibility": "always", "trace": "/opt/ros_melodic_ws/src/ros/rosbash/test/test_scripts.py:55", "reproduction": "See artifacts below (if available)", "reproduction-image": "" }, "exploitation": { "description": "", "exploitation-image": "", "exploitation-vector": "" }, "mitigation": { "description": "", "pull-request": "", "date-mitigation": "" } } ```
1.0
subprocess call with shell=True identified, security issue., /opt/ros_melodic_ws/src/ros/rosbash/test/test_scripts.py:55 - ```yaml { "id": 1, "title": "subprocess call with shell=True identified, security issue., /opt/ros_melodic_ws/src/ros/rosbash/test/test_scripts.py:55", "type": "bug", "description": "HIGH confidence of HIGH severity bug. subprocess call with shell=True identified, security issue. at /opt/ros_melodic_ws/src/ros/rosbash/test/test_scripts.py:55 See links for more info on the bug.", "cwe": "None", "cve": "None", "keywords": [ "bandit", "bug", "static analysis", "testing", "triage", "bug" ], "system": "", "vendor": null, "severity": { "rvss-score": 0, "rvss-vector": "", "severity-description": "", "cvss-score": 0, "cvss-vector": "" }, "links": "", "flaw": { "phase": "testing", "specificity": "subject-specific", "architectural-location": "application-specific", "application": "N/A", "subsystem": "N/A", "package": "N/A", "languages": "None", "date-detected": "2020-06-02 (11:26)", "detected-by": "Alias Robotics", "detected-by-method": "testing static", "date-reported": "2020-06-02 (11:26)", "reported-by": "Alias Robotics", "reported-by-relationship": "automatic", "issue": "", "reproducibility": "always", "trace": "/opt/ros_melodic_ws/src/ros/rosbash/test/test_scripts.py:55", "reproduction": "See artifacts below (if available)", "reproduction-image": "" }, "exploitation": { "description": "", "exploitation-image": "", "exploitation-vector": "" }, "mitigation": { "description": "", "pull-request": "", "date-mitigation": "" } } ```
non_process
subprocess call with shell true identified security issue opt ros melodic ws src ros rosbash test test scripts py yaml id title subprocess call with shell true identified security issue opt ros melodic ws src ros rosbash test test scripts py type bug description high confidence of high severity bug subprocess call with shell true identified security issue at opt ros melodic ws src ros rosbash test test scripts py see links for more info on the bug cwe none cve none keywords bandit bug static analysis testing triage bug system vendor null severity rvss score rvss vector severity description cvss score cvss vector links flaw phase testing specificity subject specific architectural location application specific application n a subsystem n a package n a languages none date detected detected by alias robotics detected by method testing static date reported reported by alias robotics reported by relationship automatic issue reproducibility always trace opt ros melodic ws src ros rosbash test test scripts py reproduction see artifacts below if available reproduction image exploitation description exploitation image exploitation vector mitigation description pull request date mitigation
0
11,170
13,957,694,718
IssuesEvent
2020-10-24 08:11:20
alexanderkotsev/geoportal
https://api.github.com/repos/alexanderkotsev/geoportal
opened
CY: Geoportal Process for Data Service Linking (Workshop Ispra January 2019)
CY - Cyprus Geoportal Harvesting process
Dear Angelo, we are trying to sort out things due to the upcoming workshop in Ispra. A word file is attached including some questions regarding the document provided by Robert Tomas &quot;Geoportal workflow for establishing links between data sets and network services&quot;. Your help will be greatly appreciated. The DLS Team
1.0
CY: Geoportal Process for Data Service Linking (Workshop Ispra January 2019) - Dear Angelo, we are trying to sort out things due to the upcoming workshop in Ispra. A word file is attached including some questions regarding the document provided by Robert Tomas &quot;Geoportal workflow for establishing links between data sets and network services&quot;. Your help will be greatly appreciated. The DLS Team
process
cy geoportal process for data service linking workshop ispra january dear angelo we are trying to sort out things due to the upcoming workshop in ispra a word file is attached including some questions regarding the document provided by robert tomas quot geoportal workflow for establishing links between data sets and network services quot your help will be greatly appreciated the dls team
1
87,979
17,404,830,991
IssuesEvent
2021-08-03 03:22:56
CATcher-org/CATcher
https://api.github.com/repos/CATcher-org/CATcher
closed
Representing AuthComponent state using ADT
aspect-CodeQuality category.Enhancement difficulty.Hard p.Low
# Overview Algebraic Data Types (ADTs) can be viewed as enum with data. Currently, application state in the AuthComponent is represented by a collection of booleans (isAuthenticating, isOutdatedVersion, etc) which does not appropriately and intuitively translate into the state of the application. State is determined by programmatically checking for some collection of booleans, (!isOutdatedVersion && !isUserAuthenticated) which may miss out edge cases, are hard to parse for new contributors, and susceptible to regression bugs. ## Proposal ADTs can be accomplished in typescript 2.0 with interfaces and a "disciminant". as an exmaple, we can have ``` type State = Initial | Auth | Current type Auth = Authenticating | Authenticated | NotAuthenticated | Error type Current = UpToDate | Outdated | Error ``` Implementation detail can be found [here](https://stackoverflow.com/questions/33915459/algebraic-data-types-in-typescript) as a reference. ### Notes Start after #723 is merged.
1.0
Representing AuthComponent state using ADT - # Overview Algebraic Data Types (ADTs) can be viewed as enum with data. Currently, application state in the AuthComponent is represented by a collection of booleans (isAuthenticating, isOutdatedVersion, etc) which does not appropriately and intuitively translate into the state of the application. State is determined by programmatically checking for some collection of booleans, (!isOutdatedVersion && !isUserAuthenticated) which may miss out edge cases, are hard to parse for new contributors, and susceptible to regression bugs. ## Proposal ADTs can be accomplished in typescript 2.0 with interfaces and a "disciminant". as an exmaple, we can have ``` type State = Initial | Auth | Current type Auth = Authenticating | Authenticated | NotAuthenticated | Error type Current = UpToDate | Outdated | Error ``` Implementation detail can be found [here](https://stackoverflow.com/questions/33915459/algebraic-data-types-in-typescript) as a reference. ### Notes Start after #723 is merged.
non_process
representing authcomponent state using adt overview algebraic data types adts can be viewed as enum with data currently application state in the authcomponent is represented by a collection of booleans isauthenticating isoutdatedversion etc which does not appropriately and intuitively translate into the state of the application state is determined by programmatically checking for some collection of booleans isoutdatedversion isuserauthenticated which may miss out edge cases are hard to parse for new contributors and susceptible to regression bugs proposal adts can be accomplished in typescript with interfaces and a disciminant as an exmaple we can have type state initial auth current type auth authenticating authenticated notauthenticated error type current uptodate outdated error implementation detail can be found as a reference notes start after is merged
0
18,871
24,800,463,446
IssuesEvent
2022-10-24 21:11:44
googleapis/google-cloud-go
https://api.github.com/repos/googleapis/google-cloud-go
closed
chore(ci): move linting presubmit from kokoro into GitHub Actions
type: process
We should move the Go linting tasks from kokoro into a GitHub Action for easier debugging, speed of execution (and thus quicker feedback), and ease of maintenance.
1.0
chore(ci): move linting presubmit from kokoro into GitHub Actions - We should move the Go linting tasks from kokoro into a GitHub Action for easier debugging, speed of execution (and thus quicker feedback), and ease of maintenance.
process
chore ci move linting presubmit from kokoro into github actions we should move the go linting tasks from kokoro into a github action for easier debugging speed of execution and thus quicker feedback and ease of maintenance
1
17,773
23,701,399,786
IssuesEvent
2022-08-29 19:21:54
openxla/stablehlo
https://api.github.com/repos/openxla/stablehlo
opened
Improve the interpreter tesing using fuzzing and cross-validation.
Interpreter Process
### Request description The idea here is to use a fuzzer to generate StableHLO programs, run them and validate the results against another implementation (e.g. a compiler maybe). ### Additional context _No response_
1.0
Improve the interpreter tesing using fuzzing and cross-validation. - ### Request description The idea here is to use a fuzzer to generate StableHLO programs, run them and validate the results against another implementation (e.g. a compiler maybe). ### Additional context _No response_
process
improve the interpreter tesing using fuzzing and cross validation request description the idea here is to use a fuzzer to generate stablehlo programs run them and validate the results against another implementation e g a compiler maybe additional context no response
1
207,814
15,837,375,828
IssuesEvent
2021-04-06 20:41:46
microsoft/vscode-python
https://api.github.com/repos/microsoft/vscode-python
closed
pytest test discovery tries to import conftest.py from other projects
area-testing triage type-bug
Originally posted in #12538, I thought maybe it was the same problem, but I'm not sure. My project structure is not the most common one, but not so unusual either, I think. ``` <project-root>/ |----src/ | |----<package>/ | | |----__init__.py | | |----... |----tests/ | |----__init__.py | |----conftest.py | |----test_xxxx.py | |----... |----setup.cfg ``` `setup.cfg` contains the `pytest` settings such that simple running `pytest` from `<project-root>` works perfectly. However, when I try to discover the tests in vscode, I get this error: ``` During handling of the above exception, another exception occurred: /opt/conda/envs/<env-name>/lib/python3.7/site-packages/_pytest/config/__init__.py:446: in _importconftest mod = conftestpath.pyimport() /opt/conda/envs/<env-name>/lib/python3.7/site-packages/py/_path/local.py:721: in pyimport raise self.ImportMismatchError(modname, modfile, self) E py._path.local.LocalPath.ImportMismatchError: ('tests.conftest', '/home/<user>/src/<another-project-dir>/tests/conftest.py', local('/home/<user>/src/<another-project-dir>/tests/conftest.py')) ``` It's trying to load `conftest.py` from two other projects!! I've confirmed that these projects are not installed in the conda env or accidentally cached anywhere (everywhere I could think to check). The project I'm working on has no reason to know that this project exists. From the output I found this command: ` python /home/<user>/.local/share/code-server/extensions/ms-python.python-2020.5.86806/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir <project-path> -s` If I manually run this command in my project root, it works perfectly. If I run it from my home directory, I replicate the problem where it tries to load the wrong project. In fact, the two projects it's trying to load, are the only two that have a `conftest.py` anywhere within my home directory. I tried setting `python.testing.cwd` to `<project-path>` but it didn't change anything. I can't think of anything else. 
In the end my best guess is that `run_adapter.py` should be run from the project path rather than my home directory. Any advice is appreciated. I'm willing to believe its PEBCAK and not a bug, but either way I'd be grateful for a solution. _Originally posted by @hsharrison in https://github.com/microsoft/vscode-python/issues/12538#issuecomment-731105448_
1.0
pytest test discovery tries to import conftest.py from other projects - Originally posted in #12538, I thought maybe it was the same problem, but I'm not sure. My project structure is not the most common one, but not so unusual either, I think. ``` <project-root>/ |----src/ | |----<package>/ | | |----__init__.py | | |----... |----tests/ | |----__init__.py | |----conftest.py | |----test_xxxx.py | |----... |----setup.cfg ``` `setup.cfg` contains the `pytest` settings such that simple running `pytest` from `<project-root>` works perfectly. However, when I try to discover the tests in vscode, I get this error: ``` During handling of the above exception, another exception occurred: /opt/conda/envs/<env-name>/lib/python3.7/site-packages/_pytest/config/__init__.py:446: in _importconftest mod = conftestpath.pyimport() /opt/conda/envs/<env-name>/lib/python3.7/site-packages/py/_path/local.py:721: in pyimport raise self.ImportMismatchError(modname, modfile, self) E py._path.local.LocalPath.ImportMismatchError: ('tests.conftest', '/home/<user>/src/<another-project-dir>/tests/conftest.py', local('/home/<user>/src/<another-project-dir>/tests/conftest.py')) ``` It's trying to load `conftest.py` from two other projects!! I've confirmed that these projects are not installed in the conda env or accidentally cached anywhere (everywhere I could think to check). The project I'm working on has no reason to know that this project exists. From the output I found this command: ` python /home/<user>/.local/share/code-server/extensions/ms-python.python-2020.5.86806/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir <project-path> -s` If I manually run this command in my project root, it works perfectly. If I run it from my home directory, I replicate the problem where it tries to load the wrong project. In fact, the two projects it's trying to load, are the only two that have a `conftest.py` anywhere within my home directory. 
I tried setting `python.testing.cwd` to `<project-path>` but it didn't change anything. I can't think of anything else. In the end my best guess is that `run_adapter.py` should be run from the project path rather than my home directory. Any advice is appreciated. I'm willing to believe its PEBCAK and not a bug, but either way I'd be grateful for a solution. _Originally posted by @hsharrison in https://github.com/microsoft/vscode-python/issues/12538#issuecomment-731105448_
non_process
pytest test discovery tries to import conftest py from other projects originally posted in i thought maybe it was the same problem but i m not sure my project structure is not the most common one but not so unusual either i think src init py tests init py conftest py test xxxx py setup cfg setup cfg contains the pytest settings such that simple running pytest from works perfectly however when i try to discover the tests in vscode i get this error during handling of the above exception another exception occurred opt conda envs lib site packages pytest config init py in importconftest mod conftestpath pyimport opt conda envs lib site packages py path local py in pyimport raise self importmismatcherror modname modfile self e py path local localpath importmismatcherror tests conftest home src tests conftest py local home src tests conftest py it s trying to load conftest py from two other projects i ve confirmed that these projects are not installed in the conda env or accidentally cached anywhere everywhere i could think to check the project i m working on has no reason to know that this project exists from the output i found this command python home local share code server extensions ms python python pythonfiles testing tools run adapter py discover pytest rootdir s if i manually run this command in my project root it works perfectly if i run it from my home directory i replicate the problem where it tries to load the wrong project in fact the two projects it s trying to load are the only two that have a conftest py anywhere within my home directory i tried setting python testing cwd to but it didn t change anything i can t think of anything else in the end my best guess is that run adapter py should be run from the project path rather than my home directory any advice is appreciated i m willing to believe its pebcak and not a bug but either way i d be grateful for a solution originally posted by hsharrison in
0
7,297
10,442,770,175
IssuesEvent
2019-09-18 13:42:33
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Where is "Quick Create"?
Pri2 assigned-to-author automation/svc process-automation/subsvc product-question triaged
Create Runbook section, step 3. Click the Add a runbook button found at the top of the list. On the Add Runbook page, select Quick Create. I don't see the "Add a runbook" or "Quck Create" option anywhere. --- #### Document Details โš  *Do not edit this section. It is required for docs.microsoft.com โžŸ GitHub issue linking.* * ID: 28627d8f-9037-7b22-3659-9102a2506eb3 * Version Independent ID: 20d12e2c-4b72-0683-282e-65d8a4b6c9c8 * Content: [Azure Quickstart - Create an Azure Automation runbook](https://docs.microsoft.com/en-us/azure/automation/automation-quickstart-create-runbook#feedback) * Content Source: [articles/automation/automation-quickstart-create-runbook.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-quickstart-create-runbook.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @csand-msft * Microsoft Alias: **csand**
1.0
Where is "Quick Create"? - Create Runbook section, step 3. Click the Add a runbook button found at the top of the list. On the Add Runbook page, select Quick Create. I don't see the "Add a runbook" or "Quck Create" option anywhere. --- #### Document Details โš  *Do not edit this section. It is required for docs.microsoft.com โžŸ GitHub issue linking.* * ID: 28627d8f-9037-7b22-3659-9102a2506eb3 * Version Independent ID: 20d12e2c-4b72-0683-282e-65d8a4b6c9c8 * Content: [Azure Quickstart - Create an Azure Automation runbook](https://docs.microsoft.com/en-us/azure/automation/automation-quickstart-create-runbook#feedback) * Content Source: [articles/automation/automation-quickstart-create-runbook.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-quickstart-create-runbook.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @csand-msft * Microsoft Alias: **csand**
process
where is quick create create runbook section step click the add a runbook button found at the top of the list on the add runbook page select quick create i don t see the add a runbook or quck create option anywhere document details โš  do not edit this section it is required for docs microsoft com โžŸ github issue linking id version independent id content content source service automation sub service process automation github login csand msft microsoft alias csand
1
14,246
3,814,364,940
IssuesEvent
2016-03-28 12:52:09
lintool/warcbase
https://api.github.com/repos/lintool/warcbase
closed
Deprecate loadWarc and loadArc in favour of loadArchives
documentation
Since we now have `loadArchives`, which detects ARC or WARC input and processes appropriately, I think we could go ahead and replace `loadArc` and `loadWarc` with the new function in all the documentation. I recommend we leave the original functions in Warcbase, though.
1.0
Deprecate loadWarc and loadArc in favour of loadArchives - Since we now have `loadArchives`, which detects ARC or WARC input and processes appropriately, I think we could go ahead and replace `loadArc` and `loadWarc` with the new function in all the documentation. I recommend we leave the original functions in Warcbase, though.
non_process
deprecate loadwarc and loadarc in favour of loadarchives since we now have loadarchives which detects arc or warc input and processes appropriately i think we could go ahead and replace loadarc and loadwarc with the new function in all the documentation i recommend we leave the original functions in warcbase though
0
7,171
10,533,573,800
IssuesEvent
2019-10-01 13:19:24
girder/viime
https://api.github.com/repos/girder/viime
opened
Discuss testing strategy
type: requirements/docs
I've avoided writing tests to this point because things were very fluid and a large number of unit tests would simply have slowed us down and added no value. I believe we're at a point now where a more explicit testing strategy is needed. I don't think we should strive for % coverage, but we could probably identify areas in the code where tests would add value rather than simply adding maintenance burden. In particular, I think it would be reasonable to test the vuex store, more of the things in utils, and some of the heavier logic in components such as DataTable, ProblemBar, Transform, and maybe Upload. I don't have a strong desire to test what I consider to be "leaf nodes" of the application, such as individual analyses and the transform plots, but I'm open to suggestions on those.' Testing vue components isn't an exact science, and we can follow the lead set by girder/girder_web_components where it makes sense.
1.0
Discuss testing strategy - I've avoided writing tests to this point because things were very fluid and a large number of unit tests would simply have slowed us down and added no value. I believe we're at a point now where a more explicit testing strategy is needed. I don't think we should strive for % coverage, but we could probably identify areas in the code where tests would add value rather than simply adding maintenance burden. In particular, I think it would be reasonable to test the vuex store, more of the things in utils, and some of the heavier logic in components such as DataTable, ProblemBar, Transform, and maybe Upload. I don't have a strong desire to test what I consider to be "leaf nodes" of the application, such as individual analyses and the transform plots, but I'm open to suggestions on those.' Testing vue components isn't an exact science, and we can follow the lead set by girder/girder_web_components where it makes sense.
non_process
discuss testing strategy i ve avoided writing tests to this point because things were very fluid and a large number of unit tests would simply have slowed us down and added no value i believe we re at a point now where a more explicit testing strategy is needed i don t think we should strive for coverage but we could probably identify areas in the code where tests would add value rather than simply adding maintenance burden in particular i think it would be reasonable to test the vuex store more of the things in utils and some of the heavier logic in components such as datatable problembar transform and maybe upload i don t have a strong desire to test what i consider to be leaf nodes of the application such as individual analyses and the transform plots but i m open to suggestions on those testing vue components isn t an exact science and we can follow the lead set by girder girder web components where it makes sense
0
16,496
21,472,562,816
IssuesEvent
2022-04-26 10:51:03
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
IndexOutOfBounds if output collection of multi-instance is modified
kind/bug severity/high area/ux team/process-automation
**Describe the bug** I deployed a BPMN process with a parallel multi-instance embedded subprocess. The multi-instance subprocess defines an output collection to collect the results of the iterations. If the output collection is modified during the iteration (e.g. the size of the output collection is reduced) then multi-instance can't be completed. The processing fails with an `IndexOutOfBounds` exception. The concrete exception may change depending on the modification. **To Reproduce** ![image](https://user-images.githubusercontent.com/4305769/163994928-162a2562-386f-457a-b05f-40e4ad10176f.png) [test-multi-instance-output-collection.bpmn.txt](https://github.com/camunda/zeebe/files/8512641/test-multi-instance-output-collection.bpmn.txt) 1. deploy a BPMN process with a parallel multi-instance embedded subprocess * define an output collection variable `results` 2. create an instance of the process 3. modify the variable `results` when the multi-instance is active (e.g. set the variable to `[]` - an empty array) * set variables via command * set variables via job worker * set variable via output mapping (e.g. on the start event of the embedded subprocess) 4. verify that the multi-instance is not completed 5. verify that a related error record is written **Expected behavior** An incident is created if the output element can't be added to the output collection. The incident is visible to the user and allows the user to fix the issue manually by modifying the output collection.
**Log/Stacktrace** <details><summary>Full Stacktrace</summary> <p> ``` java.lang.IndexOutOfBoundsException: index=8 capacity=8 at org.agrona.concurrent.UnsafeBuffer.boundsCheck(UnsafeBuffer.java:2212) ~[agrona-1.15.0.jar:1.15.0] at org.agrona.concurrent.UnsafeBuffer.getByte(UnsafeBuffer.java:947) ~[agrona-1.15.0.jar:1.15.0] at io.camunda.zeebe.msgpack.spec.MsgPackReader.skipValues(MsgPackReader.java:355) ~[zeebe-msgpack-core-8.0.0.jar:8.0.0] at io.camunda.zeebe.msgpack.spec.MsgPackReader.skipValue(MsgPackReader.java:350) ~[zeebe-msgpack-core-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.insertAt(MultiInstanceBodyProcessor.java:410) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.lambda$updateOutputCollection$17(MultiInstanceBodyProcessor.java:389) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.Either$Right.map(Either.java:355) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.updateOutputCollection(MultiInstanceBodyProcessor.java:380) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.lambda$updateOutputCollection$16(MultiInstanceBodyProcessor.java:366) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at java.util.Optional.map(Unknown Source) ~[?:?] 
at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.updateOutputCollection(MultiInstanceBodyProcessor.java:364) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.beforeExecutionPathCompleted(MultiInstanceBodyProcessor.java:153) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.beforeExecutionPathCompleted(MultiInstanceBodyProcessor.java:35) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnStateTransitionBehavior.lambda$beforeExecutionPathCompleted$5(BpmnStateTransitionBehavior.java:354) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnStateTransitionBehavior.invokeElementContainerIfPresent(BpmnStateTransitionBehavior.java:442) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnStateTransitionBehavior.beforeExecutionPathCompleted(BpmnStateTransitionBehavior.java:350) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnStateTransitionBehavior.transitionToCompleted(BpmnStateTransitionBehavior.java:164) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.SubProcessProcessor.lambda$onComplete$3(SubProcessProcessor.java:76) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.Either$Right.flatMap(Either.java:366) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.SubProcessProcessor.onComplete(SubProcessProcessor.java:73) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.SubProcessProcessor.onComplete(SubProcessProcessor.java:23) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processEvent(BpmnStreamProcessor.java:133) 
~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.lambda$processRecord$0(BpmnStreamProcessor.java:110) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.Either$Right.ifRightOrLeft(Either.java:381) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processRecord(BpmnStreamProcessor.java:107) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.TypedRecordProcessor.processRecord(TypedRecordProcessor.java:54) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.lambda$processInTransaction$3(ProcessingStateMachine.java:300) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.db.impl.rocksdb.transaction.ZeebeTransaction.run(ZeebeTransaction.java:84) ~[zeebe-db-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processInTransaction(ProcessingStateMachine.java:290) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processCommand(ProcessingStateMachine.java:253) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.tryToReadNextRecord(ProcessingStateMachine.java:213) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.readNextRecord(ProcessingStateMachine.java:189) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.sched.ActorJob.invoke(ActorJob.java:79) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.sched.ActorJob.execute(ActorJob.java:44) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.sched.ActorTask.execute(ActorTask.java:122) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:97) ~[zeebe-util-8.0.0.jar:8.0.0] at 
io.camunda.zeebe.util.sched.ActorThread.doWork(ActorThread.java:80) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.sched.ActorThread.run(ActorThread.java:189) ~[zeebe-util-8.0.0.jar:8.0.0] ``` </p> </details> **Environment:** - Zeebe Version: 8.0.0 (and others)
1.0
IndexOutOfBounds if output collection of multi-instance is modified - **Describe the bug** I deployed a BPMN process with a parallel multi-instance embedded subprocess. The multi-instance subprocess defines an output collection to collect the results of the iterations. If the output collection is modified during the iteration (e.g. the size of the output collection is reduced) then multi-instance can't be completed. The processing fails with an `IndexOutOfBounds` exception. The concrete exception may change depending on the modification. **To Reproduce** ![image](https://user-images.githubusercontent.com/4305769/163994928-162a2562-386f-457a-b05f-40e4ad10176f.png) [test-multi-instance-output-collection.bpmn.txt](https://github.com/camunda/zeebe/files/8512641/test-multi-instance-output-collection.bpmn.txt) 1. deploy a BPMN process with a parallel multi-instance embedded subprocess * define an output collection variable `results` 2. create an instance of the process 3. modify the variable `results` when the multi-instance is active (e.g. set the variable to `[]` - an empty array) * set variables via command * set variables via job worker * set variable via output mapping (e.g. on the start event of the embedded subprocess) 4. verify that the multi-instance is not completed 5. verify that a related error record is written **Expected behavior** An incident is created if the output element can't be added to the output collection. The incident is visible to the user and allows the user to fix the issue manually by modifying the output collection.
**Log/Stacktrace** <details><summary>Full Stacktrace</summary> <p> ``` java.lang.IndexOutOfBoundsException: index=8 capacity=8 at org.agrona.concurrent.UnsafeBuffer.boundsCheck(UnsafeBuffer.java:2212) ~[agrona-1.15.0.jar:1.15.0] at org.agrona.concurrent.UnsafeBuffer.getByte(UnsafeBuffer.java:947) ~[agrona-1.15.0.jar:1.15.0] at io.camunda.zeebe.msgpack.spec.MsgPackReader.skipValues(MsgPackReader.java:355) ~[zeebe-msgpack-core-8.0.0.jar:8.0.0] at io.camunda.zeebe.msgpack.spec.MsgPackReader.skipValue(MsgPackReader.java:350) ~[zeebe-msgpack-core-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.insertAt(MultiInstanceBodyProcessor.java:410) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.lambda$updateOutputCollection$17(MultiInstanceBodyProcessor.java:389) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.Either$Right.map(Either.java:355) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.updateOutputCollection(MultiInstanceBodyProcessor.java:380) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.lambda$updateOutputCollection$16(MultiInstanceBodyProcessor.java:366) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at java.util.Optional.map(Unknown Source) ~[?:?] 
at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.updateOutputCollection(MultiInstanceBodyProcessor.java:364) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.beforeExecutionPathCompleted(MultiInstanceBodyProcessor.java:153) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.MultiInstanceBodyProcessor.beforeExecutionPathCompleted(MultiInstanceBodyProcessor.java:35) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnStateTransitionBehavior.lambda$beforeExecutionPathCompleted$5(BpmnStateTransitionBehavior.java:354) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnStateTransitionBehavior.invokeElementContainerIfPresent(BpmnStateTransitionBehavior.java:442) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnStateTransitionBehavior.beforeExecutionPathCompleted(BpmnStateTransitionBehavior.java:350) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnStateTransitionBehavior.transitionToCompleted(BpmnStateTransitionBehavior.java:164) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.SubProcessProcessor.lambda$onComplete$3(SubProcessProcessor.java:76) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.Either$Right.flatMap(Either.java:366) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.SubProcessProcessor.onComplete(SubProcessProcessor.java:73) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.container.SubProcessProcessor.onComplete(SubProcessProcessor.java:23) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processEvent(BpmnStreamProcessor.java:133) 
~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.lambda$processRecord$0(BpmnStreamProcessor.java:110) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.Either$Right.ifRightOrLeft(Either.java:381) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processRecord(BpmnStreamProcessor.java:107) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.TypedRecordProcessor.processRecord(TypedRecordProcessor.java:54) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.lambda$processInTransaction$3(ProcessingStateMachine.java:300) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.db.impl.rocksdb.transaction.ZeebeTransaction.run(ZeebeTransaction.java:84) ~[zeebe-db-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processInTransaction(ProcessingStateMachine.java:290) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processCommand(ProcessingStateMachine.java:253) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.tryToReadNextRecord(ProcessingStateMachine.java:213) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.readNextRecord(ProcessingStateMachine.java:189) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.sched.ActorJob.invoke(ActorJob.java:79) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.sched.ActorJob.execute(ActorJob.java:44) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.sched.ActorTask.execute(ActorTask.java:122) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:97) ~[zeebe-util-8.0.0.jar:8.0.0] at 
io.camunda.zeebe.util.sched.ActorThread.doWork(ActorThread.java:80) ~[zeebe-util-8.0.0.jar:8.0.0] at io.camunda.zeebe.util.sched.ActorThread.run(ActorThread.java:189) ~[zeebe-util-8.0.0.jar:8.0.0] ``` </p> </details> **Environment:** - Zeebe Version: 8.0.0 (and others)
process
indexoutofbounds if output collection of multi instance is modified describe the bug i deployed a bpmn process with a parallel multi instance embedded subprocess the multi instance subprocess defines an output collection to collect the results of the iterations if the output collection is modified during the iteration e g the size of the output collection is reduced then multi instance can t be completed the processing fails with an indexoutofbounds exception the concrete exception may change depending on the modification to reproduce deploy a bpmn process with a parallel multi instance embedded subprocess define an output collection variable results create an instance of the process modify the variable results when the multi instance is active e g set the variable to an empty array set variables via command set variables via job worker set variable via output mapping e g on the start event of the embedded subprocess verify that the multi instance is not completed verify that a related error record is written expected behavior an incident is created if the output element can t be added to the output collection the incident is visible to the user and allows to fix the issue manually by modifying the output collection log stacktrace full stacktrace java lang indexoutofboundsexception index capacity at org agrona concurrent unsafebuffer boundscheck unsafebuffer java at org agrona concurrent unsafebuffer getbyte unsafebuffer java at io camunda zeebe msgpack spec msgpackreader skipvalues msgpackreader java at io camunda zeebe msgpack spec msgpackreader skipvalue msgpackreader java at io camunda zeebe engine processing bpmn container multiinstancebodyprocessor insertat multiinstancebodyprocessor java at io camunda zeebe engine processing bpmn container multiinstancebodyprocessor lambda updateoutputcollection multiinstancebodyprocessor java at io camunda zeebe util either right map either java at io camunda zeebe engine processing bpmn container multiinstancebodyprocessor 
updateoutputcollection multiinstancebodyprocessor java at io camunda zeebe engine processing bpmn container multiinstancebodyprocessor lambda updateoutputcollection multiinstancebodyprocessor java at java util optional map unknown source at io camunda zeebe engine processing bpmn container multiinstancebodyprocessor updateoutputcollection multiinstancebodyprocessor java at io camunda zeebe engine processing bpmn container multiinstancebodyprocessor beforeexecutionpathcompleted multiinstancebodyprocessor java at io camunda zeebe engine processing bpmn container multiinstancebodyprocessor beforeexecutionpathcompleted multiinstancebodyprocessor java at io camunda zeebe engine processing bpmn behavior bpmnstatetransitionbehavior lambda beforeexecutionpathcompleted bpmnstatetransitionbehavior java at io camunda zeebe engine processing bpmn behavior bpmnstatetransitionbehavior invokeelementcontainerifpresent bpmnstatetransitionbehavior java at io camunda zeebe engine processing bpmn behavior bpmnstatetransitionbehavior beforeexecutionpathcompleted bpmnstatetransitionbehavior java at io camunda zeebe engine processing bpmn behavior bpmnstatetransitionbehavior transitiontocompleted bpmnstatetransitionbehavior java at io camunda zeebe engine processing bpmn container subprocessprocessor lambda oncomplete subprocessprocessor java at io camunda zeebe util either right flatmap either java at io camunda zeebe engine processing bpmn container subprocessprocessor oncomplete subprocessprocessor java at io camunda zeebe engine processing bpmn container subprocessprocessor oncomplete subprocessprocessor java at io camunda zeebe engine processing bpmn bpmnstreamprocessor processevent bpmnstreamprocessor java at io camunda zeebe engine processing bpmn bpmnstreamprocessor lambda processrecord bpmnstreamprocessor java at io camunda zeebe util either right ifrightorleft either java at io camunda zeebe engine processing bpmn bpmnstreamprocessor processrecord bpmnstreamprocessor java at io 
camunda zeebe engine processing streamprocessor typedrecordprocessor processrecord typedrecordprocessor java at io camunda zeebe engine processing streamprocessor processingstatemachine lambda processintransaction processingstatemachine java at io camunda zeebe db impl rocksdb transaction zeebetransaction run zeebetransaction java at io camunda zeebe engine processing streamprocessor processingstatemachine processintransaction processingstatemachine java at io camunda zeebe engine processing streamprocessor processingstatemachine processcommand processingstatemachine java at io camunda zeebe engine processing streamprocessor processingstatemachine trytoreadnextrecord processingstatemachine java at io camunda zeebe engine processing streamprocessor processingstatemachine readnextrecord processingstatemachine java at io camunda zeebe util sched actorjob invoke actorjob java at io camunda zeebe util sched actorjob execute actorjob java at io camunda zeebe util sched actortask execute actortask java at io camunda zeebe util sched actorthread executecurrenttask actorthread java at io camunda zeebe util sched actorthread dowork actorthread java at io camunda zeebe util sched actorthread run actorthread java environment zeebe version and others
1
49,470
12,344,624,212
IssuesEvent
2020-05-15 07:24:16
angular/angular-cli
https://api.github.com/repos/angular/angular-cli
closed
Server bundles are minified by default
comp: devkit/build-angular
<!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 Oh hi there! 😄 To expedite issue processing please search open and closed issues before submitting a new one. Existing issues often contain information about workarounds, resolution, or progress updates. 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅--> # 🐞 Bug report ### Command (mark with an `x`) <!-- Can you pin-point the command or commands that are affected by this bug? --> <!-- ✏️edit: --> - [ ] new - [x] build - [ ] serve - [ ] test - [ ] e2e - [ ] generate - [ ] add - [ ] update - [ ] lint - [ ] xi18n - [ ] run - [ ] config - [ ] help - [ ] version - [ ] doc ### Is this a regression? No ### Description By default, the production server bundles generated by the Angular CLI are minified, making debugging much harder, despite the code comment at https://github.com/angular/angular-cli/blob/master/packages/angular_devkit/build_angular/src/angular-cli-files/models/webpack-configs/common.ts#L416 reading: > // On server, we don't want to compress anything. We still set the ngDevMode = false for it > // to remove dev code, and ngI18nClosureMode to remove Closure compiler i18n code The reason being that, when `buildOptions.platform == 'server'` is `true`, the following object is passed to the `TerserPlugin` `compress` option: ``` { ecma: terserEcma, global_defs: angularGlobalDefinitions, keep_fnames: true, } ``` However, according to the TerserPlugin's [docs](https://github.com/terser/terser#compress-options), most `compress` flags default to `true`, so this effectively causes the server code to be minified. In addition, unsetting the `scripts` optimizations in `angular.json` causes other side effects, namely `ngDevMode` no longer being explicitly set to `false`, causing the server code to run in dev mode by default.
``` "optimization": { "scripts": false, "styles": true }, ``` One workaround is to set the `NG_BUILD_DEBUG_OPTIMIZE` flag, which does turn off `compress`, but my concern is that it would get removed & renamed at some point since it's undocumented. I think `compress` should be set to `false` for server builds, as the code comment currently reads, as working with minified code on the server is a pain. It would cause server-side bundles to jump in size, e.g. for my (fairly large) app it goes from 20 MB (with `optimization: true`) to 32 MB (with `optimization: true` but `compress: false`), but is definitely worth it IMO. I can look into drafting a PR if you guys agree with that. ## 🔬 Minimal Reproduction <pre><code> $ npm install -g @angular/cli@latest $ ng new ssr-minify-test $ cd ssr-minify-test/ $ ng add @nguniversal/express-engine $ npm run build:ssr </code></pre> Manually inspect `dist/ssr-minify-test/server/main.js` -> code is minified / unreadable ## 🌍 Your Environment <pre><code> _ _ ____ _ ___ / \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _| / △ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | | / ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | | /_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___| |___/ Angular CLI: 9.1.1 Node: 12.4.0 OS: darwin x64 Angular: ... Ivy Workspace: Package Version ------------------------------------------------------ @angular-devkit/architect 0.901.1 @angular-devkit/core 9.1.1 @angular-devkit/schematics 9.1.1 @schematics/angular 9.1.1 @schematics/update 0.901.1 rxjs 6.5.4 </code></pre>
1.0
Server bundles are minified by default - <!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 Oh hi there! 😄 To expedite issue processing please search open and closed issues before submitting a new one. Existing issues often contain information about workarounds, resolution, or progress updates. 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅--> # 🐞 Bug report ### Command (mark with an `x`) <!-- Can you pin-point the command or commands that are affected by this bug? --> <!-- ✏️edit: --> - [ ] new - [x] build - [ ] serve - [ ] test - [ ] e2e - [ ] generate - [ ] add - [ ] update - [ ] lint - [ ] xi18n - [ ] run - [ ] config - [ ] help - [ ] version - [ ] doc ### Is this a regression? No ### Description By default, the production server bundles generated by the Angular CLI are minified, making debugging much harder, despite the code comment at https://github.com/angular/angular-cli/blob/master/packages/angular_devkit/build_angular/src/angular-cli-files/models/webpack-configs/common.ts#L416 reading: > // On server, we don't want to compress anything. We still set the ngDevMode = false for it > // to remove dev code, and ngI18nClosureMode to remove Closure compiler i18n code The reason being that, when `buildOptions.platform == 'server'` is `true`, the following object is passed to the `TerserPlugin` `compress` option: ``` { ecma: terserEcma, global_defs: angularGlobalDefinitions, keep_fnames: true, } ``` However, according to the TerserPlugin's [docs](https://github.com/terser/terser#compress-options), most `compress` flags default to `true`, so this effectively causes the server code to be minified.
In addition, unsetting the `scripts` optimizations in `angular.json` causes other side effects, namely `ngDevMode` no longer being explicitly set to `false`, causing the server code to run in dev mode by default. ``` "optimization": { "scripts": false, "styles": true }, ``` One workaround is to set the `NG_BUILD_DEBUG_OPTIMIZE` flag, which does turn off `compress`, but my concern is that it would get removed & renamed at some point since it's undocumented. I think `compress` should be set to `false` for server builds, as the code comment currently reads, as working with minified code on the server is a pain. It would cause server-side bundles to jump in size, e.g. for my (fairly large) app it goes from 20 MB (with `optimization: true`) to 32 MB (with `optimization: true` but `compress: false`), but is definitely worth it IMO. I can look into drafting a PR if you guys agree with that. ## 🔬 Minimal Reproduction <pre><code> $ npm install -g @angular/cli@latest $ ng new ssr-minify-test $ cd ssr-minify-test/ $ ng add @nguniversal/express-engine $ npm run build:ssr </code></pre> Manually inspect `dist/ssr-minify-test/server/main.js` -> code is minified / unreadable ## 🌍 Your Environment <pre><code> _ _ ____ _ ___ / \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _| / △ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | | / ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | | /_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___| |___/ Angular CLI: 9.1.1 Node: 12.4.0 OS: darwin x64 Angular: ... Ivy Workspace: Package Version ------------------------------------------------------ @angular-devkit/architect 0.901.1 @angular-devkit/core 9.1.1 @angular-devkit/schematics 9.1.1 @schematics/angular 9.1.1 @schematics/update 0.901.1 rxjs 6.5.4 </code></pre>
non_process
server bundles are minified by default 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 oh hi there 😄 to expedite issue processing please search open and closed issues before submitting a new one existing issues often contain information about workarounds resolution or progress updates 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 🐞 bug report command mark with an x new build serve test generate add update lint run config help version doc is this a regression no description by default the production server bundles generated by the angular cli are minified making debugging much harder despite the code comment at reading on server we don t want to compress anything we still set the ngdevmode false for it to remove dev code and to remove closure compiler code the reason being that when buildoptions platform server is true the following object is passed to the terserplugin compress option ecma terserecma global defs angularglobaldefinitions keep fnames true however according to the terserplugin s most compress flags default to true so this effectively causes the server code to be minified in addition unsetting the scripts optimizations in angular json causes other side effects namely ngdevmode no longer being explicitly set to false causing the server code to run in dev mode by default optimization scripts false styles true one workaround is to set the ng build debug optimize flag which does turn off compress but my concern is that it would get removed renamed at some point since it s undocumented i think compress should be set to false for server builds as the code comment currently reads as working with minified code on the server is a pain it would cause server side bundles to jump in size e g for my fairly large app it goes from mb with optimization true to mb with optimization true but compress
false but is definitely worth it imo i can look into drafting a pr if you guys agree with that 🔬 minimal reproduction npm install g angular cli latest ng new ssr minify test cd ssr minify test ng add nguniversal express engine npm run build ssr manually inspect dist ssr minify test server main js code is minified unreadable 🌍 your environment △ angular cli node os darwin angular ivy workspace package version angular devkit architect angular devkit core angular devkit schematics schematics angular schematics update rxjs
0
446,757
12,878,268,103
IssuesEvent
2020-07-11 15:40:14
DevAdventCalendar/DevAdventCalendar
https://api.github.com/repos/DevAdventCalendar/DevAdventCalendar
closed
Fix Sonarcloud bugs
bug good first issue medium priority
**To Reproduce** Steps to reproduce the behavior: 1. Go to https://sonarcloud.io/project/issues?id=DevAdventCalendar_DevAdventCalendar&resolved=false&types=BUG **Current behavior** There are many bugs and vulnerabilities **Expected behavior** There are no bugs (besides this from /libs or /wwwroot folder)
1.0
Fix Sonarcloud bugs - **To Reproduce** Steps to reproduce the behavior: 1. Go to https://sonarcloud.io/project/issues?id=DevAdventCalendar_DevAdventCalendar&resolved=false&types=BUG **Current behavior** There are many bugs and vulnerabilities **Expected behavior** There are no bugs (besides this from /libs or /wwwroot folder)
non_process
fix sonarcloud bugs to reproduce steps to reproduce the behavior go to current behavior there are many bugs and vulnerabilities expected behavior there are no bugs besides this from libs or wwwroot folder
0
14,221
17,141,466,631
IssuesEvent
2021-07-13 10:04:43
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
Preprocessor chain 'climate_statistics' and 'multimodel_statistics' fails as time dimension is not present
bug preprocessor
The construction of a preprocessor that computes first `climate_statistics` and then `multimodel_statistics` fails with a missing `time` coordinate error from the latter. This is reasonable as for individual model `climate_statistics` the time axis is replaced with the one corresponding to requested operation (either 'day_of_year', 'month_number', 'season_number'), but `multimodel_statistics` expects to receive a cube with the `time` coordinate I made a test and it should be a big issue to replace the deletion of `time` coordinate from the cube after aggregation https://github.com/ESMValGroup/ESMValCore/blob/7f8406bdafe898c05d37f398457023b228e79324/esmvalcore/preprocessor/_time.py#L537-L539 with a demotion of the cube `time` coordinate to auxiliary (time will have values coherent with the aggregation operation), as in the following ```python @@ -535,7 +535,8 @@ def climate_statistics(cube, clim_coord = _get_period_coord(cube, period, seasons) operator = get_iris_analysis_operation(operator) clim_cube = cube.aggregated_by(clim_coord, operator) - clim_cube.remove_coord('time') + iris.util.demote_dim_coord_to_aux_coord(clim_cube,'time') if clim_cube.coord(clim_coord.name()).is_monotonic(): iris.util.promote_aux_coord_to_dim_coord(clim_cube, clim_coord.name()) else: ```
1.0
Preprocessor chain 'climate_statistics' and 'multimodel_statistics' fails as time dimension is not present - The construction of a preprocessor that computes first `climate_statistics` and then `multimodel_statistics` fails with a missing `time` coordinate error from the latter. This is reasonable as for individual model `climate_statistics` the time axis is replaced with the one corresponding to requested operation (either 'day_of_year', 'month_number', 'season_number'), but `multimodel_statistics` expects to receive a cube with the `time` coordinate I made a test and it should be a big issue to replace the deletion of `time` coordinate from the cube after aggregation https://github.com/ESMValGroup/ESMValCore/blob/7f8406bdafe898c05d37f398457023b228e79324/esmvalcore/preprocessor/_time.py#L537-L539 with a demotion of the cube `time` coordinate to auxiliary (time will have values coherent with the aggregation operation), as in the following ```python @@ -535,7 +535,8 @@ def climate_statistics(cube, clim_coord = _get_period_coord(cube, period, seasons) operator = get_iris_analysis_operation(operator) clim_cube = cube.aggregated_by(clim_coord, operator) - clim_cube.remove_coord('time') + iris.util.demote_dim_coord_to_aux_coord(clim_cube,'time') if clim_cube.coord(clim_coord.name()).is_monotonic(): iris.util.promote_aux_coord_to_dim_coord(clim_cube, clim_coord.name()) else: ```
process
preprocessor chain climate statistics and multimodel statistics fails as time dimension is not present the construction of a preprocessor that computes first climate statistics and then multimodel statistics fails with a missing time coordinate error from the latter this is reasonable as for individual model climate statistics the time axis is replaced with the one corresponding to requested operation either day of year month number season number but multimodel statistics expects to receive a cube with the time coordinate i made a test and it should be a big issue to replace the deletion of time coordinate from the cube after aggregation with a demotion of the cube time coordinate to auxiliary time will have values coherent with the aggregation operation as in the following python def climate statistics cube clim coord get period coord cube period seasons operator get iris analysis operation operator clim cube cube aggregated by clim coord operator clim cube remove coord time iris util demote dim coord to aux coord clim cube time if clim cube coord clim coord name is monotonic iris util promote aux coord to dim coord clim cube clim coord name else
1
36,731
8,101,776,142
IssuesEvent
2018-08-12 17:29:36
MDAnalysis/mdanalysis
https://api.github.com/repos/MDAnalysis/mdanalysis
opened
Residue.chi1_angle() selection is incomplete
Component-Core Component-Selections Difficulty-easy Format-PDB GSOC Starter defect
**Expected behavior** `Residue.chi1_angle()` https://github.com/MDAnalysis/mdanalysis/blob/388346d38370d833b48152f61405a18942d25251/package/MDAnalysis/core/topologyattrs.py#L614 should find all chi1 angles for all residues for standard PDB files from the Protein Databank. **Actual behavior** @hfmull noticed in https://github.com/MDAnalysis/mdanalysis/pull/2033/files#r208769338 that the standard selection is incomplete. He implemented a custom selection in the new `analysis.dihedrals.Janin` class. **Code to reproduce the behavior** Need example file. **Currently version of MDAnalysis** develop 0.18.1-dev
1.0
Residue.chi1_angle() selection is incomplete - **Expected behavior** `Residue.chi1_angle()` https://github.com/MDAnalysis/mdanalysis/blob/388346d38370d833b48152f61405a18942d25251/package/MDAnalysis/core/topologyattrs.py#L614 should find all chi1 angles for all residues for standard PDB files from the Protein Databank. **Actual behavior** @hfmull noticed in https://github.com/MDAnalysis/mdanalysis/pull/2033/files#r208769338 that the standard selection is incomplete. He implemented a custom selection in the new `analysis.dihedrals.Janin` class. **Code to reproduce the behavior** Need example file. **Currently version of MDAnalysis** develop 0.18.1-dev
non_process
residue angle selection is incomplete expected behavior residue angle should find all angles for all residues for standard pdb files from the protein databank actual behavior hfmull noticed in that the standard selection is incomplete he implemented a custom selection in the new analysis dihedrals janin class code to reproduce the behavior need example file currently version of mdanalysis develop dev
0
184,222
14,281,890,637
IssuesEvent
2020-11-23 08:48:11
enonic/lib-admin-ui
https://api.github.com/repos/enonic/lib-admin-ui
opened
Option Set - set's label is not displayed after reopening the content.
Bug Test is Failing
1. Select `Features` and open new wizard for `Option Set` The label is configured in this content type ``` <option-set name="radioOptionSet"> <label>Single selection</label> ``` 2. Fill in the name input, then select `Option2` in `Single selection` and save it ![image](https://user-images.githubusercontent.com/3728712/99942835-58df6a80-2d81-11eb-9446-43b52043087c.png) `Single selection` label is displayed in the form - OK 3. Reopen the content . **BUG**: `Single selection` label is not displayed in the form: ![image](https://user-images.githubusercontent.com/3728712/99942972-93e19e00-2d81-11eb-96f5-5a20f948c599.png)
1.0
Option Set - set's label is not displayed after reopening the content. - 1. Select `Features` and open new wizard for `Option Set` The label is configured in this content type ``` <option-set name="radioOptionSet"> <label>Single selection</label> ``` 2. Fill in the name input, then select `Option2` in `Single selection` and save it ![image](https://user-images.githubusercontent.com/3728712/99942835-58df6a80-2d81-11eb-9446-43b52043087c.png) `Single selection` label is displayed in the form - OK 3. Reopen the content . **BUG**: `Single selection` label is not displayed in the form: ![image](https://user-images.githubusercontent.com/3728712/99942972-93e19e00-2d81-11eb-96f5-5a20f948c599.png)
non_process
option set set s label is not displayed after reopening the content select features and open new wizard for option set the label is configured in this content type single selection fill in the name input then select in single selection and save it single selection label is displayed in the form ok reopen the content bug single selection label is not displayed in the form
0
20,815
27,578,116,710
IssuesEvent
2023-03-08 14:27:09
python/cpython
https://api.github.com/repos/python/cpython
closed
Pool spawns way too many subprocesses in map() when cgroup cpu.shares is default or set high
expert-multiprocessing
I have a user running some proprietary code that is misbehaving in the multiprocessing.map() function. In the HTCondor scheduler, each job is assigned a cpu.shares number of 100 times the number of CPUS requested for the job. When submitting a 40-worker pool, the pstree looks very orderly when the CPU request to HTCondor is set to 1 - the main python process has 40 child processes and no more, the load average climbs to about 40 and stays there, and the job crunches its numbers and returns a result in the expected time. Each python child process' cpu.shares cgroup value is set to 100, inherited from the parent process that HTCondor launched, and each of them uses about 100% cpu time shown in "top." When the CPU request is set to 40, however, and cpu.shares is set to 4000 as a result, pstree shows the main process, the child processes and then, unexpectedly, there are child processes of the child processes, as if each of the 40 pool workers is trying to use 40 CPUs instead of one. The load average soon spikes above 1,000, the context switching goes through the roof, and the job slows to a crawl. The code has the expected if __name__=='__main__' statement to shelter the master process' code, and the start method is set to "spawn" with force set to True. I'm working on gathering snapshots of the behavior, I'll post them on this issue shortly, but if anyone has any ideas in the meantime as to what might be going on, I'd appreciate the feedback.
1.0
Pool spawns way too many subprocesses in map() when cgroup cpu.shares is default or set high - I have a user running some proprietary code that is misbehaving in the multiprocessing.map() function. In the HTCondor scheduler, each job is assigned a cpu.shares number of 100 times the number of CPUS requested for the job. When submitting a 40-worker pool, the pstree looks very orderly when the CPU request to HTCondor is set to 1 - the main python process has 40 child processes and no more, the load average climbs to about 40 and stays there, and the job crunches its numbers and returns a result in the expected time. Each python child process' cpu.shares cgroup value is set to 100, inherited from the parent process that HTCondor launched, and each of them uses about 100% cpu time shown in "top." When the CPU request is set to 40, however, and cpu.shares is set to 4000 as a result, pstree shows the main process, the child processes and then, unexpectedly, there are child processes of the child processes, as if each of the 40 pool workers is trying to use 40 CPUs instead of one. The load average soon spikes above 1,000, the context switching goes through the roof, and the job slows to a crawl. The code has the expected if __name__=='__main__' statement to shelter the master process' code, and the start method is set to "spawn" with force set to True. I'm working on gathering snapshots of the behavior, I'll post them on this issue shortly, but if anyone has any ideas in the meantime as to what might be going on, I'd appreciate the feedback.
process
pool spawns way too many subprocesses in map when cgroup cpu shares is default or set high i have a user running some proprietary code that is misbehaving in the multiprocessing map function in the htcondor scheduler each job is assigned a cpu shares number of times the number of cpus requested for the job when submitting a worker pool the pstree looks very orderly when the cpu request to htcondor is set to the main python process has child processes and no more the load average climbs to about and stays there and the job crunches its numbers and returns a result in the expected time each python child process cpu shares cgroup value is set to inherited from the parent process that htcondor launched and each of them uses about cpu time shown in top when the cpu request is set to however and cpu shares is set to as a result pstree shows the main process the child processes and then unexpectedly there are child processes of the child processes as if each of the pool workers is trying to use cpus instead of one the load average soon spikes above the context switching goes through the roof and the job slows to a crawl the code has the expected if name main statement to shelter the master process code and the start method is set to spawn with force set to true i m working on gathering snapshots of the behavior i ll post them on this issue shortly but if anyone has any ideas in the meantime as to what might be going on i d appreciate the feedback
1
20,578
3,828,011,188
IssuesEvent
2016-03-31 02:20:29
jecrockett/gametime
https://api.github.com/repos/jecrockett/gametime
closed
Testing strategy
before eval testing
* Discuss approach to testing. * What will we test? * What will we use for testing? * What will we omit?
1.0
Testing strategy - * Discuss approach to testing. * What will we test? * What will we use for testing? * What will we omit?
non_process
testing strategy discuss approach to testing what will we test what will we use for testing what will we omit
0
22,660
31,895,945,931
IssuesEvent
2023-09-18 01:42:40
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
Change term - basisOfRecord
Term - change Class - Record-level non-normative Task Group - Material Sample Process - complete
## Term change * Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/) * Efficacy Justification (why is this change necessary?): It would be understood that MaterialEntity would be an informal superclass to `dwc:MaterialSample`, `dwc:PreservedSpecimen`, `dwc:LivingSpecimen`, `dwc:FossilSpecimen`. Examples involving the use of MaterialEntity would be required. * Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations. * Stability Justification (what concerns are there that this might affect existing implementations?): The addition of MaterialEntity as an example for will facilitate usaged by Global Biodiversity Information Facility (GBIF) Darwin Core Archives. * Implications for dwciri: namespace (does this change affect a dwciri term version)?: No Current Term definition: https://dwc.tdwg.org/list/#dwc_basisOfRecord Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~): * Term name (in lowerCamelCase for properties, UpperCamelCase for classes): basisOfRecord * Term label (English, not normative): Basis of Record * Organized in Class (e.g., Occurrence, Event, Location, Taxon): * Definition of the term (normative): The specific nature of the data record. * Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use the standard label of one of the Darwin Core classes. 
* Examples (not normative): **MaterialEntity,** PreservedSpecimen, FossilSpecimen, LivingSpecimen, MaterialSample, Event, HumanObservation, MachineObservation, Taxon, Occurrence, MaterialCitation * Refines (identifier of the broader term this term refines; normative): * Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/RecordBasis
1.0
Change term - basisOfRecord - ## Term change * Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/) * Efficacy Justification (why is this change necessary?): It would be understood that MaterialEntity would be an informal superclass to `dwc:MaterialSample`, `dwc:PreservedSpecimen`, `dwc:LivingSpecimen`, `dwc:FossilSpecimen`. Examples involving the use of MaterialEntity would be required. * Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations. * Stability Justification (what concerns are there that this might affect existing implementations?): The addition of MaterialEntity as an example for will facilitate usaged by Global Biodiversity Information Facility (GBIF) Darwin Core Archives. * Implications for dwciri: namespace (does this change affect a dwciri term version)?: No Current Term definition: https://dwc.tdwg.org/list/#dwc_basisOfRecord Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~): * Term name (in lowerCamelCase for properties, UpperCamelCase for classes): basisOfRecord * Term label (English, not normative): Basis of Record * Organized in Class (e.g., Occurrence, Event, Location, Taxon): * Definition of the term (normative): The specific nature of the data record. * Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use the standard label of one of the Darwin Core classes. 
* Examples (not normative): **MaterialEntity,** PreservedSpecimen, FossilSpecimen, LivingSpecimen, MaterialSample, Event, HumanObservation, MachineObservation, Taxon, Occurrence, MaterialCitation * Refines (identifier of the broader term this term refines; normative): * Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/RecordBasis
process
change term basisofrecord term change submitter efficacy justification why is this change necessary it would be understood that materialentity would be an informal superclass to dwc materialsample dwc preservedspecimen dwc livingspecimen dwc fossilspecimen examples involving the use of materialentity would be required demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations the addition of materialentity as an example for will facilitate usaged by global biodiversity information facility gbif darwin core archives implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes basisofrecord term label english not normative basis of record organized in class e g occurrence event location taxon definition of the term normative the specific nature of the data record usage comments recommendations regarding content etc not normative recommended best practice is to use the standard label of one of the darwin core classes examples not normative materialentity preservedspecimen fossilspecimen livingspecimen materialsample event humanobservation machineobservation taxon occurrence materialcitation refines identifier of the broader term this term refines normative replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative datasets dataset units unit recordbasis
1
102,004
12,736,977,637
IssuesEvent
2020-06-25 17:53:59
department-of-veterans-affairs/caseflow
https://api.github.com/repos/department-of-veterans-affairs/caseflow
opened
Discovery Research Study: Intake Generative Research
Design: size-squirrel ๐Ÿฟ Product: caseflow-intake Team: Foxtrot ๐ŸฆŠ Type: design ๐Ÿ’…
## User story As an Intake user, I need a way to share my needs and ideas in order to improve the product. ## Problem statement <!-- Describe the problem the design, writing, or research is intended to solve. --> Currently, there is no clear way for Intake users to relay issues they are facing with the product outside of support tickets and moreover, express any needs or ideas to improve their workflow and user experience. ## What is out of scope? <!-- This can be particularly helpful for research tickets. Does not need to be an exhaustive list, but should clearly define the boundaries of the work --> Any design solutions ## Background/context <!-- Why are we designing/writing this? Who is it for? What research has been done that tells us this needs to be designed, written, or researched? --> In order to gather user needs and requirements, the Caseflow Intake team will spend time learning from BVA and VBA Intake users through 45-minute user research sessions. ## What are the unknowns? <!-- If there are key unknowns or assumptions, add them here. If we're accepting the risks associated with the unknowns or assumptions, let us know that too. --> - What upward approvals are needed - What upward communication is needed ## Success criteria <!-- Include as needed, especially for issues that aren't part of epics. if no measurable success criteria, what does success look like? --> - [ ] `1 point` [Research Plan](https://docs.google.com/document/d/1QJQmI_yhZy4nEfS1Nfv927gpCD31Iu0DS9fmabumQXo/edit?usp=sharing) (https://app.mural.co/t/workqueue2001/m/workqueue2001/1593019961366/87411402f67814c17f01d3feb3a25cd9c41d1f3f) - [ ] `1 point` Feedback ## Technical/logistical constraints (if known) <!-- Are there technical constraints that will impact any design or writing solution? Logistical constraints that will impact user research? --> - Getting DVC enhancements from Jim
2.0
Discovery Research Study: Intake Generative Research - ## User story As an Intake user, I need a way to share my needs and ideas in order to improve the product. ## Problem statement <!-- Describe the problem the design, writing, or research is intended to solve. --> Currently, there is no clear way for Intake users to relay issues they are facing with the product outside of support tickets and moreover, express any needs or ideas to improve their workflow and user experience. ## What is out of scope? <!-- This can be particularly helpful for research tickets. Does not need to be an exhaustive list, but should clearly define the boundaries of the work --> Any design solutions ## Background/context <!-- Why are we designing/writing this? Who is it for? What research has been done that tells us this needs to be designed, written, or researched? --> In order to gather user needs and requirements, the Caseflow Intake team will spend time learning from BVA and VBA Intake users through 45-minute user research sessions. ## What are the unknowns? <!-- If there are key unknowns or assumptions, add them here. If we're accepting the risks associated with the unknowns or assumptions, let us know that too. --> - What upward approvals are needed - What upward communication is needed ## Success criteria <!-- Include as needed, especially for issues that aren't part of epics. if no measurable success criteria, what does success look like? --> - [ ] `1 point` [Research Plan](https://docs.google.com/document/d/1QJQmI_yhZy4nEfS1Nfv927gpCD31Iu0DS9fmabumQXo/edit?usp=sharing) (https://app.mural.co/t/workqueue2001/m/workqueue2001/1593019961366/87411402f67814c17f01d3feb3a25cd9c41d1f3f) - [ ] `1 point` Feedback ## Technical/logistical constraints (if known) <!-- Are there technical constraints that will impact any design or writing solution? Logistical constraints that will impact user research? --> - Getting DVC enhancements from Jim
non_process
discovery research study intake generative research user story as an intake user i need a way to share my needs and ideas in order to improve the product problem statement currently there is no clear way for intake users to relay issues they are facing with the product outside of support tickets and moreover express any needs or ideas to improve their workflow and user experience what is out of scope any design solutions background context in order to gather user needs and requirements the caseflow intake team will spend time learning from bva and vba intake users through minute user research sessions what are the unknowns what upward approvals are needed what upward communication is needed success criteria point point feedback technical logistical constraints if known getting dvc enhancements from jim
0
4,068
7,001,509,536
IssuesEvent
2017-12-18 10:29:58
aiidateam/aiida_core
https://api.github.com/repos/aiidateam/aiida_core
closed
Array of calculations
coding-day/done topic/JobCalculationAndProcess type/accepted feature
Originally reported by: **Giovanni Pizzi (Bitbucket: [pizzi](https://bitbucket.org/pizzi), GitHub: [giovannipizzi](https://github.com/giovannipizzi))** ---------------------------------------- Due to policies of some clusters, sometimes only few jobs can be submitted, and they require to run with a script similar to the following: ``` mpirun -np 16 exec < input1 >out1 & mpirun -np 16 exec < input2 >out2 & mpirun -np 16 exec < input3 >out3 & wait ``` This can be achieved as follows: * add a ```only_send=True``` parameter to ```.submit()``` that will send all files of each calculation but will not execute the ```qsub``` command. * Set the state in a new SENTONLY state * Create a new calculation of type, e.g., ArrayCalculation, having as input the RemoteData of the parent jobs, in SENTONLY state. The script will run each of the parent scripts with ```&``` at the end, end then add ```wait``` * A simple parser of the ArrayCalculation will simply switch the parent calculation states from SENTONLY to COMPUTED, so the daemon will then parse them. Issues: * one can call directly the scripts of the parent calculation, maybe, but there may be issues with the environment variables (e.g. if different set of modules are loaded, or for the PBS_O_WORKDIR, etc.). Understand if it is possible to load an independent environment for each subjob, and if this is ok. * If the parent is the RemoteData node, one can create the ArrayCalculation only after submission, but at the moment this is not detectable * If a child calculation has to be run, it has to be attached to each calculation and not to the ArrayCalculation, since its parser will just trigger the parsing of the parents, but not actually parse them. ---------------------------------------- - Bitbucket: https://bitbucket.org/aiida_team/aiida_core/issue/111
1.0
Array of calculations - Originally reported by: **Giovanni Pizzi (Bitbucket: [pizzi](https://bitbucket.org/pizzi), GitHub: [giovannipizzi](https://github.com/giovannipizzi))** ---------------------------------------- Due to policies of some clusters, sometimes only few jobs can be submitted, and they require to run with a script similar to the following: ``` mpirun -np 16 exec < input1 >out1 & mpirun -np 16 exec < input2 >out2 & mpirun -np 16 exec < input3 >out3 & wait ``` This can be achieved as follows: * add a ```only_send=True``` parameter to ```.submit()``` that will send all files of each calculation but will not execute the ```qsub``` command. * Set the state in a new SENTONLY state * Create a new calculation of type, e.g., ArrayCalculation, having as input the RemoteData of the parent jobs, in SENTONLY state. The script will run each of the parent scripts with ```&``` at the end, end then add ```wait``` * A simple parser of the ArrayCalculation will simply switch the parent calculation states from SENTONLY to COMPUTED, so the daemon will then parse them. Issues: * one can call directly the scripts of the parent calculation, maybe, but there may be issues with the environment variables (e.g. if different set of modules are loaded, or for the PBS_O_WORKDIR, etc.). Understand if it is possible to load an independent environment for each subjob, and if this is ok. * If the parent is the RemoteData node, one can create the ArrayCalculation only after submission, but at the moment this is not detectable * If a child calculation has to be run, it has to be attached to each calculation and not to the ArrayCalculation, since its parser will just trigger the parsing of the parents, but not actually parse them. ---------------------------------------- - Bitbucket: https://bitbucket.org/aiida_team/aiida_core/issue/111
process
array of calculations originally reported by giovanni pizzi bitbucket github due to policies of some clusters sometimes only few jobs can be submitted and they require to run with a script similar to the following mpirun np exec mpirun np exec mpirun np exec wait this can be achieved as follows add a only send true parameter to submit that will send all files of each calculation but will not execute the qsub command set the state in a new sentonly state create a new calculation of type e g arraycalculation having as input the remotedata of the parent jobs in sentonly state the script will run each of the parent scripts with at the end end then add wait a simple parser of the arraycalculation will simply switch the parent calculation states from sentonly to computed so the daemon will then parse them issues one can call directly the scripts of the parent calculation maybe but there may be issues with the environment variables e g if different set of modules are loaded or for the pbs o workdir etc understand if it is possible to load an independent environment for each subjob and if this is ok if the parent is the remotedata node one can create the arraycalculation only after submission but at the moment this is not detectable if a child calculation has to be run it has to be attached to each calculation and not to the arraycalculation since its parser will just trigger the parsing of the parents but not actually parse them bitbucket
1
57,412
7,055,698,520
IssuesEvent
2018-01-04 09:32:57
drupaltransylvania/drupal-camp
https://api.github.com/repos/drupaltransylvania/drupal-camp
opened
Create article design
design
As a front-end developer I want to have designed the article page So that I can implement it. **Acceptance criteria** The article page will have a title and WYSIWYG body field that can contain the following elements: - H1, H2, H3 - Bold, Italic, Underline - Tables Please make a smaller header for this type of pages, because it will not have an introduction part.
1.0
Create article design - As a front-end developer I want to have designed the article page So that I can implement it. **Acceptance criteria** The article page will have a title and WYSIWYG body field that can contain the following elements: - H1, H2, H3 - Bold, Italic, Underline - Tables Please make a smaller header for this type of pages, because it will not have an introduction part.
non_process
create article design as a front end developer i want to have designed the article page so that i can implement it acceptance criteria the article page will have a title and wysiwyg body field that can contain the following elements bold italic underline tables please make a smaller header for this type of pages because it will not have an introduction part
0
29,557
13,133,439,053
IssuesEvent
2020-08-06 20:55:25
terraform-providers/terraform-provider-aws
https://api.github.com/repos/terraform-providers/terraform-provider-aws
closed
Secrets manager policy validation fails for principals that are just created
bug service/iam service/secretsmanager
<!--- Please note the following potential times when an issue might be in Terraform core: * [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues * [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues * [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues * [Registry](https://registry.terraform.io/) issues * Spans resources across multiple providers If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead. ---> <!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a ๐Ÿ‘ [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Terraform CLI and Terraform AWS Provider Version <!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). ---> Terraform v0.12.16 + provider.archive v1.3.0 + provider.aws v3.0.0 + provider.null v2.1.2 ### Affected Resource(s) <!--- Please list the affected resources and data sources. 
---> * aws_iam_role * aws_secretsmanager_secret ### Terraform Configuration Files <!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code ---> ```hcl resource "aws_iam_role" "this" { name = join("-", ["instance-profile", "test"]) assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "ec2.amazonaws.com" }, "Effect": "Allow", "Sid": "" } ] } EOF } data "aws_iam_policy_document" "this" { statement { effect = "Allow" resources = ["*"] actions = ["secretsmanager:Get*"] principals { type = "AWS" identifiers = [aws_iam_role.this.arn] } } } resource "aws_secretsmanager_secret" "this" { name = "test" description = "Created by test" recovery_window_in_days = 0 policy = data.aws_iam_policy_document.this.json } ``` ### Debug Output <!--- Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist. To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html). ---> ### Panic Output <!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. ---> ### Expected Behavior Secret is generated with the appropriate permission at the first apply ### Actual Behavior The validation for the policy fails with the following output data.aws_caller_identity.current: Refreshing state... aws_iam_role.this: Creating... aws_iam_role.this: Creation complete after 2s [id=instance-profile-test] data.aws_iam_policy_document.this: Refreshing state... aws_secretsmanager_secret.this: Creating... Error: error setting Secrets Manager Secret "arn:aws:secretsmanager:ap-southeast-2:XXXXXXX: secret:test-e5XYyU" policy: MalformedPolicyDocumentException: This resource policy contains an unsupported principal. 
on main.tf line 6, in resource "aws_secretsmanager_secret" "this": 6: resource "aws_secretsmanager_secret" "this" { **NOTE**: a subsequent apply works as expected. I suspect the validation of the policy happens before the IAM role arn is actually available for querying due to [IAM eventual consistency](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency) ### Steps to Reproduce 1. Ensure is the first execution or run `terraform destroy` 2. `terraform apply` --> Generate an error 3. `terraform apply` --> Successful execution
2.0
Secrets manager policy validation fails for principals that are just created - <!--- Please note the following potential times when an issue might be in Terraform core: * [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues * [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues * [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues * [Registry](https://registry.terraform.io/) issues * Spans resources across multiple providers If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead. ---> <!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a ๐Ÿ‘ [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Terraform CLI and Terraform AWS Provider Version <!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). ---> Terraform v0.12.16 + provider.archive v1.3.0 + provider.aws v3.0.0 + provider.null v2.1.2 ### Affected Resource(s) <!--- Please list the affected resources and data sources. 
---> * aws_iam_role * aws_secretsmanager_secret ### Terraform Configuration Files <!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code ---> ```hcl resource "aws_iam_role" "this" { name = join("-", ["instance-profile", "test"]) assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "ec2.amazonaws.com" }, "Effect": "Allow", "Sid": "" } ] } EOF } data "aws_iam_policy_document" "this" { statement { effect = "Allow" resources = ["*"] actions = ["secretsmanager:Get*"] principals { type = "AWS" identifiers = [aws_iam_role.this.arn] } } } resource "aws_secretsmanager_secret" "this" { name = "test" description = "Created by test" recovery_window_in_days = 0 policy = data.aws_iam_policy_document.this.json } ``` ### Debug Output <!--- Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist. To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html). ---> ### Panic Output <!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. ---> ### Expected Behavior Secret is generated with the appropriate permission at the first apply ### Actual Behavior The validation for the policy fails with the following output data.aws_caller_identity.current: Refreshing state... aws_iam_role.this: Creating... aws_iam_role.this: Creation complete after 2s [id=instance-profile-test] data.aws_iam_policy_document.this: Refreshing state... aws_secretsmanager_secret.this: Creating... Error: error setting Secrets Manager Secret "arn:aws:secretsmanager:ap-southeast-2:XXXXXXX: secret:test-e5XYyU" policy: MalformedPolicyDocumentException: This resource policy contains an unsupported principal. 
on main.tf line 6, in resource "aws_secretsmanager_secret" "this": 6: resource "aws_secretsmanager_secret" "this" { **NOTE**: a subsequent apply works as expected. I suspect the validation of the policy happens before the IAM role arn is actually available for querying due to [IAM eventual consistency](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency) ### Steps to Reproduce 1. Ensure is the first execution or run `terraform destroy` 2. `terraform apply` --> Generate an error 3. `terraform apply` --> Successful execution
non_process
secrets manager policy validation fails for principals that are just created please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a ๐Ÿ‘ to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform cli and terraform aws provider version terraform provider archive provider aws provider null affected resource s aws iam role aws secretsmanager secret terraform configuration files hcl resource aws iam role this name join assume role policy eof version statement action sts assumerole principal service amazonaws com effect allow sid eof data aws iam policy document this statement effect allow resources actions principals type aws identifiers resource aws secretsmanager secret this name test description created by test recovery window in days policy data aws iam policy document this json debug output please provide a link to a github gist containing the complete debug output please do not paste the debug output in the issue just paste a link to the gist to obtain the debug output see the panic output expected behavior secret is generated with the appropriate permission at the first apply actual behavior the validation for the policy fails with the following output data aws caller identity current refreshing state aws iam role this creating aws iam role this creation complete after data aws iam policy document this refreshing state aws secretsmanager secret this creating error error setting secrets 
manager secret arn aws secretsmanager ap southeast xxxxxxx secret test policy malformedpolicydocumentexception this resource policy contains an unsupported principal on main tf line in resource aws secretsmanager secret this resource aws secretsmanager secret this note a subsequent apply works as expected i suspect the validation of the policy happens before the iam role arn is actually available for querying due to steps to reproduce ensure is the first execution or run terraform destroy terraform apply generate an error terraform apply successful execution
0
51,780
13,648,272,252
IssuesEvent
2020-09-26 08:17:59
srivatsamarichi/tailspin-spacegame
https://api.github.com/repos/srivatsamarichi/tailspin-spacegame
closed
CVE-2018-11694 (High) detected in node-sass0bd48bbad6fccb0da16d3bdf76ad541f5f45ec70, node-sass-4.12.0.tgz
bug security vulnerability
## CVE-2018-11694 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass0bd48bbad6fccb0da16d3bdf76ad541f5f45ec70</b>, <b>node-sass-4.12.0.tgz</b></p></summary> <p> <details><summary><b>node-sass-4.12.0.tgz</b></p></summary> <p>Wrapper around libsass</p> <p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz</a></p> <p>Path to dependency file: tailspin-spacegame/package.json</p> <p>Path to vulnerable library: tailspin-spacegame/node_modules/node-sass/package.json</p> <p> Dependency Hierarchy: - :x: **node-sass-4.12.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/tailspin-spacegame/commit/18bed90b3f61ffbe393dbb67ae624f4355632bcc">18bed90b3f61ffbe393dbb67ae624f4355632bcc</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact. 
<p>Publish Date: 2018-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694</a></p> <p>Release Date: 2018-06-04</p> <p>Fix Resolution: LibSass - 3.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-11694 (High) detected in node-sass0bd48bbad6fccb0da16d3bdf76ad541f5f45ec70, node-sass-4.12.0.tgz - ## CVE-2018-11694 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass0bd48bbad6fccb0da16d3bdf76ad541f5f45ec70</b>, <b>node-sass-4.12.0.tgz</b></p></summary> <p> <details><summary><b>node-sass-4.12.0.tgz</b></p></summary> <p>Wrapper around libsass</p> <p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz</a></p> <p>Path to dependency file: tailspin-spacegame/package.json</p> <p>Path to vulnerable library: tailspin-spacegame/node_modules/node-sass/package.json</p> <p> Dependency Hierarchy: - :x: **node-sass-4.12.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/tailspin-spacegame/commit/18bed90b3f61ffbe393dbb67ae624f4355632bcc">18bed90b3f61ffbe393dbb67ae624f4355632bcc</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact. 
<p>Publish Date: 2018-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694</a></p> <p>Release Date: 2018-06-04</p> <p>Fix Resolution: LibSass - 3.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in node node sass tgz cve high severity vulnerability vulnerable libraries node node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tailspin spacegame package json path to vulnerable library tailspin spacegame node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit a href vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass functions selector append which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
0
393,393
26,990,019,460
IssuesEvent
2023-02-09 19:05:16
Mingadinga/2023-Design-Pattern
https://api.github.com/repos/Mingadinga/2023-Design-Pattern
opened
Adapter Pattern
documentation
## ํ•ต์‹ฌ ์˜๋„ ํŠน์ • ํด๋ž˜์Šค ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํด๋ผ์ด์–ธํŠธ์—์„œ ์š”๊ตฌํ•˜๋Š” ๋‹ค๋ฅธ ์ธ์Šคํ„ด์Šค๋กœ ๋ณ€ํ™˜ํ•œ๋‹ค. ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ ํ˜ธํ™˜๋˜์ง€ ์•Š์•„ ๊ฐ™์ด ์“ธ ์ˆ˜ ์—†์—ˆ๋˜ ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋„์™€์ค€๋‹ค. ## ์ ์šฉ ์ƒํ™ฉ ํด๋ผ์ด์–ธํŠธ๊ฐ€ ๊ธฐ์กด์— ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ ์žˆ๋Š”๋ฐ, ํ˜ธํ™˜๋˜์ง€ ์•Š๋Š” ์ƒˆ๋กœ์šด ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๊ธฐ์กด ์ธํ„ฐํŽ˜์ด์Šค(ํƒ€๊นƒ)์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ์„ ๋•Œ ์–ด๋Œ‘ํ„ฐ ํŒจํ„ด์„ ์‚ฌ์šฉํ•œ๋‹ค. ## ์†”๋ฃจ์…˜์˜ ๊ตฌ์กฐ์™€ ๊ฐ ์š”์†Œ์˜ ์—ญํ•  ![image](https://user-images.githubusercontent.com/53958188/217912706-875c00ff-ec21-4330-b7cf-b1b51a4131d6.png) ์œ„์˜ ๊ทธ๋ฆผ์€ ๊ฐ์ฒด ์–ด๋Œ‘ํ„ฐ์˜ ๊ตฌ์กฐ. ๋‹ค์ค‘ ์ƒ์†์ด ๊ฐ€๋Šฅํ•œ ์–ธ์–ด๋กœ ํด๋ž˜์Šค ์–ด๋Œ‘ํ„ฐ๋„ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค. ํƒ€๊นƒ๊ณผ ์–ด๋Œ‘ํ‹ฐ๋ฅผ ์ƒ์†ํ•˜์—ฌ ์–ด๋Œ‘ํ„ฐ๋ฅผ ๋งŒ๋“ ๋‹ค. ์œ„์˜ ๊ทธ๋ฆผ์€ ๊ฐ์ฒด ์–ด๋Œ‘ํ„ฐ์˜ ๊ตฌ์กฐ. ๋‹ค์ค‘ ์ƒ์†์ด ๊ฐ€๋Šฅํ•œ ์–ธ์–ด๋กœ ํด๋ž˜์Šค ์–ด๋Œ‘ํ„ฐ๋„ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค. ํƒ€๊นƒ๊ณผ ์–ด๋Œ‘ํ‹ฐ๋ฅผ ์ƒ์†ํ•˜์—ฌ ์–ด๋Œ‘ํ„ฐ๋ฅผ ๋งŒ๋“ ๋‹ค. ### ๊ฐ์ฒด์—๊ฒŒ ์ฑ…์ž„์„ ๋ถ„ํ• ํ•˜๊ธฐ ์šฐ์„  ํด๋ผ์ด์–ธํŠธ๊ฐ€ ๊ธฐ์กด์— ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ Target์ด๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ƒˆ๋กญ๊ฒŒ ์‚ฌ์šฉํ•˜๋ ค๊ณ  ํ•˜๋Š” ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ Adaptee์ด๋‹ค. ํด๋ผ์ด์–ธํŠธ๊ฐ€ Adaptee๋ฅผ Target ํƒ€์ž…์œผ๋กœ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์ด ๋‘˜ ์‚ฌ์ด์— ์š”์ฒญ์„ ํ†ต๊ณผ์‹œ์ผœ์ค„ ์ˆ˜ ์žˆ๋Š” ์ฑ…์ž„์ด ํ•„์š”ํ•œ๋ฐ, ์ด๋ฅผ Adapter๊ฐ€ ์ˆ˜ํ–‰ํ•œ๋‹ค. ### ๊ตฌํ˜„ ํฌ์ธํŠธ ์–ด๋Œ‘ํ„ฐ ํด๋ž˜์Šค๋Š” ํƒ€๊นƒ ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๊ตฌํ˜„ํ•˜๊ณ , ์–ด๋Œ‘ํ‹ฐ ํด๋ž˜์Šค๋ฅผ ๊ตฌ์„ฑ์œผ๋กœ ๊ฐ€์ง„๋‹ค. ๊ทธ๋ž˜์„œ ํƒ€๊นƒ ์ธํ„ฐํŽ˜์ด์Šค๋กœ ๋ฉ”์‹œ์ง€๊ฐ€ ์š”์ฒญ์ด ์˜ค๋ฉด ์–ด๋Œ‘ํ‹ฐ์˜ ๋ฉ”์‹œ์ง€๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ฉ”์‹œ์ง€๋ฅผ ํ†ต๊ณผ์‹œํ‚จ๋‹ค. ## ์ ์šฉ ์˜ˆ์‹œ ### ์š”๊ตฌ์‚ฌํ•ญ ๊ธฐ์กด์— ์‚ฌ์šฉํ•˜๋˜ Iterator ์ธํ„ฐํŽ˜์ด์Šค์— Enumeration ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ์ถ”๊ฐ€๋กœ ์‚ฌ์šฉํ•˜๋ ค๊ณ  ํ•œ๋‹ค. ํด๋ผ์ด์–ธํŠธ๋Š” Iterator ํƒ€์ž…์œผ๋กœ Enumeration์„ ์‚ฌ์šฉํ•œ๋‹ค. 
### ์„ค๊ณ„ ![image](https://user-images.githubusercontent.com/53958188/217912559-49643c14-165e-464a-b153-c8380516b7f5.png) ์–ด๋Œ‘ํ„ฐ์ธ EnumerationIterator๋Š” ํƒ€๊นƒ์ธ Iterator์„ ๊ตฌํ˜„ํ•˜๊ณ  ์–ด๋Œ‘ํ‹ฐ์ธ Enumeration์„ ๊ตฌ์„ฑ์œผ๋กœ ๊ฐ€์ง„๋‹ค. Iterator์˜ ์ถ”์ƒ ๋ฉ”์†Œ๋“œ๋ฅผ ๊ตฌํ˜„ํ•  ๋•Œ Enumeration์˜ ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ๋˜๋Š”๋ฐ, remove()๋Š” ๋Œ€์‘ํ•˜๋Š” ๋ฉ”์†Œ๋“œ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์˜ˆ์™ธ๋ฅผ ๋˜์ง„๋‹ค. ํด๋ผ์ด์–ธํŠธ๋Š” ์–ด๋Œ‘ํ„ฐ๋ฅผ ํ†ตํ•ด Iterator๋กœ Enumeration์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ### ์ฝ”๋“œ EnumerationIterator ```java public class EnumerationIterator implements Iterator<Object> { Enumeration<?> enumeration; public EnumerationIterator(Enumeration<?> enumeration) { this.enumeration = enumeration; } public boolean hasNext() { return enumeration.hasMoreElements(); } public Object next() { return enumeration.nextElement(); } public void remove() { throw new UnsupportedOperationException(); } } ``` EnumerationIteratorTest ```java public class EnumerationIteratorTestDrive { public static void main (String args[]) { Vector<String> v = new Vector<String>(Arrays.asList(args)); // Iterator ํƒ€์ž…์œผ๋กœ Enumeration ์‚ฌ์šฉ ์ค‘ Iterator<?> iterator = new EnumerationIterator(v.elements()); while (iterator.hasNext()) { System.out.println(iterator.next()); } } } ```
1.0
Adapter Pattern - ## ํ•ต์‹ฌ ์˜๋„ ํŠน์ • ํด๋ž˜์Šค ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํด๋ผ์ด์–ธํŠธ์—์„œ ์š”๊ตฌํ•˜๋Š” ๋‹ค๋ฅธ ์ธ์Šคํ„ด์Šค๋กœ ๋ณ€ํ™˜ํ•œ๋‹ค. ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ ํ˜ธํ™˜๋˜์ง€ ์•Š์•„ ๊ฐ™์ด ์“ธ ์ˆ˜ ์—†์—ˆ๋˜ ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋„์™€์ค€๋‹ค. ## ์ ์šฉ ์ƒํ™ฉ ํด๋ผ์ด์–ธํŠธ๊ฐ€ ๊ธฐ์กด์— ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ ์žˆ๋Š”๋ฐ, ํ˜ธํ™˜๋˜์ง€ ์•Š๋Š” ์ƒˆ๋กœ์šด ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๊ธฐ์กด ์ธํ„ฐํŽ˜์ด์Šค(ํƒ€๊นƒ)์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ์„ ๋•Œ ์–ด๋Œ‘ํ„ฐ ํŒจํ„ด์„ ์‚ฌ์šฉํ•œ๋‹ค. ## ์†”๋ฃจ์…˜์˜ ๊ตฌ์กฐ์™€ ๊ฐ ์š”์†Œ์˜ ์—ญํ•  ![image](https://user-images.githubusercontent.com/53958188/217912706-875c00ff-ec21-4330-b7cf-b1b51a4131d6.png) ์œ„์˜ ๊ทธ๋ฆผ์€ ๊ฐ์ฒด ์–ด๋Œ‘ํ„ฐ์˜ ๊ตฌ์กฐ. ๋‹ค์ค‘ ์ƒ์†์ด ๊ฐ€๋Šฅํ•œ ์–ธ์–ด๋กœ ํด๋ž˜์Šค ์–ด๋Œ‘ํ„ฐ๋„ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค. ํƒ€๊นƒ๊ณผ ์–ด๋Œ‘ํ‹ฐ๋ฅผ ์ƒ์†ํ•˜์—ฌ ์–ด๋Œ‘ํ„ฐ๋ฅผ ๋งŒ๋“ ๋‹ค. ์œ„์˜ ๊ทธ๋ฆผ์€ ๊ฐ์ฒด ์–ด๋Œ‘ํ„ฐ์˜ ๊ตฌ์กฐ. ๋‹ค์ค‘ ์ƒ์†์ด ๊ฐ€๋Šฅํ•œ ์–ธ์–ด๋กœ ํด๋ž˜์Šค ์–ด๋Œ‘ํ„ฐ๋„ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค. ํƒ€๊นƒ๊ณผ ์–ด๋Œ‘ํ‹ฐ๋ฅผ ์ƒ์†ํ•˜์—ฌ ์–ด๋Œ‘ํ„ฐ๋ฅผ ๋งŒ๋“ ๋‹ค. ### ๊ฐ์ฒด์—๊ฒŒ ์ฑ…์ž„์„ ๋ถ„ํ• ํ•˜๊ธฐ ์šฐ์„  ํด๋ผ์ด์–ธํŠธ๊ฐ€ ๊ธฐ์กด์— ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ Target์ด๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ƒˆ๋กญ๊ฒŒ ์‚ฌ์šฉํ•˜๋ ค๊ณ  ํ•˜๋Š” ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ Adaptee์ด๋‹ค. ํด๋ผ์ด์–ธํŠธ๊ฐ€ Adaptee๋ฅผ Target ํƒ€์ž…์œผ๋กœ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์ด ๋‘˜ ์‚ฌ์ด์— ์š”์ฒญ์„ ํ†ต๊ณผ์‹œ์ผœ์ค„ ์ˆ˜ ์žˆ๋Š” ์ฑ…์ž„์ด ํ•„์š”ํ•œ๋ฐ, ์ด๋ฅผ Adapter๊ฐ€ ์ˆ˜ํ–‰ํ•œ๋‹ค. ### ๊ตฌํ˜„ ํฌ์ธํŠธ ์–ด๋Œ‘ํ„ฐ ํด๋ž˜์Šค๋Š” ํƒ€๊นƒ ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๊ตฌํ˜„ํ•˜๊ณ , ์–ด๋Œ‘ํ‹ฐ ํด๋ž˜์Šค๋ฅผ ๊ตฌ์„ฑ์œผ๋กœ ๊ฐ€์ง„๋‹ค. ๊ทธ๋ž˜์„œ ํƒ€๊นƒ ์ธํ„ฐํŽ˜์ด์Šค๋กœ ๋ฉ”์‹œ์ง€๊ฐ€ ์š”์ฒญ์ด ์˜ค๋ฉด ์–ด๋Œ‘ํ‹ฐ์˜ ๋ฉ”์‹œ์ง€๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ฉ”์‹œ์ง€๋ฅผ ํ†ต๊ณผ์‹œํ‚จ๋‹ค. ## ์ ์šฉ ์˜ˆ์‹œ ### ์š”๊ตฌ์‚ฌํ•ญ ๊ธฐ์กด์— ์‚ฌ์šฉํ•˜๋˜ Iterator ์ธํ„ฐํŽ˜์ด์Šค์— Enumeration ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ์ถ”๊ฐ€๋กœ ์‚ฌ์šฉํ•˜๋ ค๊ณ  ํ•œ๋‹ค. ํด๋ผ์ด์–ธํŠธ๋Š” Iterator ํƒ€์ž…์œผ๋กœ Enumeration์„ ์‚ฌ์šฉํ•œ๋‹ค. 
### ์„ค๊ณ„ ![image](https://user-images.githubusercontent.com/53958188/217912559-49643c14-165e-464a-b153-c8380516b7f5.png) ์–ด๋Œ‘ํ„ฐ์ธ EnumerationIterator๋Š” ํƒ€๊นƒ์ธ Iterator์„ ๊ตฌํ˜„ํ•˜๊ณ  ์–ด๋Œ‘ํ‹ฐ์ธ Enumeration์„ ๊ตฌ์„ฑ์œผ๋กœ ๊ฐ€์ง„๋‹ค. Iterator์˜ ์ถ”์ƒ ๋ฉ”์†Œ๋“œ๋ฅผ ๊ตฌํ˜„ํ•  ๋•Œ Enumeration์˜ ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ๋˜๋Š”๋ฐ, remove()๋Š” ๋Œ€์‘ํ•˜๋Š” ๋ฉ”์†Œ๋“œ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์˜ˆ์™ธ๋ฅผ ๋˜์ง„๋‹ค. ํด๋ผ์ด์–ธํŠธ๋Š” ์–ด๋Œ‘ํ„ฐ๋ฅผ ํ†ตํ•ด Iterator๋กœ Enumeration์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ### ์ฝ”๋“œ EnumerationIterator ```java public class EnumerationIterator implements Iterator<Object> { Enumeration<?> enumeration; public EnumerationIterator(Enumeration<?> enumeration) { this.enumeration = enumeration; } public boolean hasNext() { return enumeration.hasMoreElements(); } public Object next() { return enumeration.nextElement(); } public void remove() { throw new UnsupportedOperationException(); } } ``` EnumerationIteratorTest ```java public class EnumerationIteratorTestDrive { public static void main (String args[]) { Vector<String> v = new Vector<String>(Arrays.asList(args)); // Iterator ํƒ€์ž…์œผ๋กœ Enumeration ์‚ฌ์šฉ ์ค‘ Iterator<?> iterator = new EnumerationIterator(v.elements()); while (iterator.hasNext()) { System.out.println(iterator.next()); } } } ```
non_process
adapter pattern ํ•ต์‹ฌ ์˜๋„ ํŠน์ • ํด๋ž˜์Šค ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํด๋ผ์ด์–ธํŠธ์—์„œ ์š”๊ตฌํ•˜๋Š” ๋‹ค๋ฅธ ์ธ์Šคํ„ด์Šค๋กœ ๋ณ€ํ™˜ํ•œ๋‹ค ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ ํ˜ธํ™˜๋˜์ง€ ์•Š์•„ ๊ฐ™์ด ์“ธ ์ˆ˜ ์—†์—ˆ๋˜ ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋„์™€์ค€๋‹ค ์ ์šฉ ์ƒํ™ฉ ํด๋ผ์ด์–ธํŠธ๊ฐ€ ๊ธฐ์กด์— ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ ์žˆ๋Š”๋ฐ ํ˜ธํ™˜๋˜์ง€ ์•Š๋Š” ์ƒˆ๋กœ์šด ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๊ธฐ์กด ์ธํ„ฐํŽ˜์ด์Šค ํƒ€๊นƒ ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ์„ ๋•Œ ์–ด๋Œ‘ํ„ฐ ํŒจํ„ด์„ ์‚ฌ์šฉํ•œ๋‹ค ์†”๋ฃจ์…˜์˜ ๊ตฌ์กฐ์™€ ๊ฐ ์š”์†Œ์˜ ์—ญํ•  ์œ„์˜ ๊ทธ๋ฆผ์€ ๊ฐ์ฒด ์–ด๋Œ‘ํ„ฐ์˜ ๊ตฌ์กฐ ๋‹ค์ค‘ ์ƒ์†์ด ๊ฐ€๋Šฅํ•œ ์–ธ์–ด๋กœ ํด๋ž˜์Šค ์–ด๋Œ‘ํ„ฐ๋„ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค ํƒ€๊นƒ๊ณผ ์–ด๋Œ‘ํ‹ฐ๋ฅผ ์ƒ์†ํ•˜์—ฌ ์–ด๋Œ‘ํ„ฐ๋ฅผ ๋งŒ๋“ ๋‹ค ์œ„์˜ ๊ทธ๋ฆผ์€ ๊ฐ์ฒด ์–ด๋Œ‘ํ„ฐ์˜ ๊ตฌ์กฐ ๋‹ค์ค‘ ์ƒ์†์ด ๊ฐ€๋Šฅํ•œ ์–ธ์–ด๋กœ ํด๋ž˜์Šค ์–ด๋Œ‘ํ„ฐ๋„ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค ํƒ€๊นƒ๊ณผ ์–ด๋Œ‘ํ‹ฐ๋ฅผ ์ƒ์†ํ•˜์—ฌ ์–ด๋Œ‘ํ„ฐ๋ฅผ ๋งŒ๋“ ๋‹ค ๊ฐ์ฒด์—๊ฒŒ ์ฑ…์ž„์„ ๋ถ„ํ• ํ•˜๊ธฐ ์šฐ์„  ํด๋ผ์ด์–ธํŠธ๊ฐ€ ๊ธฐ์กด์— ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ target์ด๋‹ค ๊ทธ๋ฆฌ๊ณ  ์ƒˆ๋กญ๊ฒŒ ์‚ฌ์šฉํ•˜๋ ค๊ณ  ํ•˜๋Š” ์ธํ„ฐํŽ˜์ด์Šค๊ฐ€ adaptee์ด๋‹ค ํด๋ผ์ด์–ธํŠธ๊ฐ€ adaptee๋ฅผ target ํƒ€์ž…์œผ๋กœ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์ด ๋‘˜ ์‚ฌ์ด์— ์š”์ฒญ์„ ํ†ต๊ณผ์‹œ์ผœ์ค„ ์ˆ˜ ์žˆ๋Š” ์ฑ…์ž„์ด ํ•„์š”ํ•œ๋ฐ ์ด๋ฅผ adapter๊ฐ€ ์ˆ˜ํ–‰ํ•œ๋‹ค ๊ตฌํ˜„ ํฌ์ธํŠธ ์–ด๋Œ‘ํ„ฐ ํด๋ž˜์Šค๋Š” ํƒ€๊นƒ ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๊ตฌํ˜„ํ•˜๊ณ  ์–ด๋Œ‘ํ‹ฐ ํด๋ž˜์Šค๋ฅผ ๊ตฌ์„ฑ์œผ๋กœ ๊ฐ€์ง„๋‹ค ๊ทธ๋ž˜์„œ ํƒ€๊นƒ ์ธํ„ฐํŽ˜์ด์Šค๋กœ ๋ฉ”์‹œ์ง€๊ฐ€ ์š”์ฒญ์ด ์˜ค๋ฉด ์–ด๋Œ‘ํ‹ฐ์˜ ๋ฉ”์‹œ์ง€๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ฉ”์‹œ์ง€๋ฅผ ํ†ต๊ณผ์‹œํ‚จ๋‹ค ์ ์šฉ ์˜ˆ์‹œ ์š”๊ตฌ์‚ฌํ•ญ ๊ธฐ์กด์— ์‚ฌ์šฉํ•˜๋˜ iterator ์ธํ„ฐํŽ˜์ด์Šค์— enumeration ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ์ถ”๊ฐ€๋กœ ์‚ฌ์šฉํ•˜๋ ค๊ณ  ํ•œ๋‹ค ํด๋ผ์ด์–ธํŠธ๋Š” iterator ํƒ€์ž…์œผ๋กœ enumeration์„ ์‚ฌ์šฉํ•œ๋‹ค ์„ค๊ณ„ ์–ด๋Œ‘ํ„ฐ์ธ enumerationiterator๋Š” ํƒ€๊นƒ์ธ iterator์„ ๊ตฌํ˜„ํ•˜๊ณ  ์–ด๋Œ‘ํ‹ฐ์ธ enumeration์„ ๊ตฌ์„ฑ์œผ๋กœ ๊ฐ€์ง„๋‹ค iterator์˜ ์ถ”์ƒ ๋ฉ”์†Œ๋“œ๋ฅผ ๊ตฌํ˜„ํ•  ๋•Œ enumeration์˜ ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ๋˜๋Š”๋ฐ remove ๋Š” ๋Œ€์‘ํ•˜๋Š” ๋ฉ”์†Œ๋“œ๊ฐ€ 
์—†์œผ๋ฏ€๋กœ ์˜ˆ์™ธ๋ฅผ ๋˜์ง„๋‹ค ํด๋ผ์ด์–ธํŠธ๋Š” ์–ด๋Œ‘ํ„ฐ๋ฅผ ํ†ตํ•ด iterator๋กœ enumeration์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค ์ฝ”๋“œ enumerationiterator java public class enumerationiterator implements iterator enumeration enumeration public enumerationiterator enumeration enumeration this enumeration enumeration public boolean hasnext return enumeration hasmoreelements public object next return enumeration nextelement public void remove throw new unsupportedoperationexception enumerationiteratortest java public class enumerationiteratortestdrive public static void main string args vector v new vector arrays aslist args iterator ํƒ€์ž…์œผ๋กœ enumeration ์‚ฌ์šฉ ์ค‘ iterator iterator new enumerationiterator v elements while iterator hasnext system out println iterator next
0
112,159
24,235,714,016
IssuesEvent
2022-09-26 22:57:59
robert-altom/test
https://api.github.com/repos/robert-altom/test
closed
Change WaitForObjectWithText to use the @text selector in the path
1.6.3 in code review gitlab
Now the WaitForObjectWithText command will simply search for the first object that match the path given then check if it has the correct text. We should check that the object has the correct text before bringing to driver in case the object doesn't have the required text server should continue the search. --- <sub>You can find the original issue from GitLab [here](https://gitlab.com/altom/altunity/altunitytester/-/issues/459).</sub>
1.0
Change WaitForObjectWithText to use the @text selector in the path - Now the WaitForObjectWithText command will simply search for the first object that match the path given then check if it has the correct text. We should check that the object has the correct text before bringing to driver in case the object doesn't have the required text server should continue the search. --- <sub>You can find the original issue from GitLab [here](https://gitlab.com/altom/altunity/altunitytester/-/issues/459).</sub>
non_process
change waitforobjectwithtext to use the text selector in the path now the waitforobjectwithtext command will simply search for the first object that match the path given then check if it has the correct text we should check that the object has the correct text before bringing to driver in case the object doesn t have the required text server should continue the search you can find the original issue from gitlab
0
2,446
5,226,052,496
IssuesEvent
2017-01-27 20:07:44
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Child Process: fork stdio option doesn't support the String variant that spawn does
child_process feature request good first contribution
Version: v7.4.0 Platform: Windows 64 bit The following code forks a script but all it's stdio objects are null index.js ```js var child = childProcess.fork('./userapp/test.js', [], { stdio: 'pipe' }); console.log('stdio', child.stdio); ``` test.js ```js var count = 0; setInterval(function () { if (count == 3) process.exit(1); console.log('test: ' + count); count++; }, 1000, 0); ``` console output: ``` stdio [ null, null, null, null ] test: 0 test: 1 test: 2 ``` Also i've tried to set the childs stdio with `stdio: [stream, stream, stream]` but this didn't work and the child used the parents stream to output
1.0
Child Process: fork stdio option doesn't support the String variant that spawn does - Version: v7.4.0 Platform: Windows 64 bit The following code forks a script but all it's stdio objects are null index.js ```js var child = childProcess.fork('./userapp/test.js', [], { stdio: 'pipe' }); console.log('stdio', child.stdio); ``` test.js ```js var count = 0; setInterval(function () { if (count == 3) process.exit(1); console.log('test: ' + count); count++; }, 1000, 0); ``` console output: ``` stdio [ null, null, null, null ] test: 0 test: 1 test: 2 ``` Also i've tried to set the childs stdio with `stdio: [stream, stream, stream]` but this didn't work and the child used the parents stream to output
process
child process fork stdio option doesn t support the string variant that spawn does version platform windows bit the following code forks a script but all it s stdio objects are null index js js var child childprocess fork userapp test js stdio pipe console log stdio child stdio test js js var count setinterval function if count process exit console log test count count console output stdio test test test also i ve tried to set the childs stdio with stdio but this didn t work and the child used the parents stream to output
1
259,198
27,621,719,688
IssuesEvent
2023-03-10 01:06:32
mittell/ruby-practice
https://api.github.com/repos/mittell/ruby-practice
opened
turbo-rails-1.3.3.gem: 1 vulnerabilities (highest severity is: 7.5)
Mend: dependency security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>turbo-rails-1.3.3.gem</b></p></summary> <p></p> <p>Path to dependency file: /rails-intro/Gemfile.lock</p> <p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/rack-2.2.6.2.gem</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (turbo-rails version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2023-27530](https://www.mend.io/vulnerability-database/CVE-2023-27530) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | rack-2.2.6.2.gem | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2023-27530</summary> ### Vulnerable Library - <b>rack-2.2.6.2.gem</b></p> <p>Rack provides a minimal, modular and adaptable interface for developing web applications in Ruby. By wrapping HTTP requests and responses in the simplest way possible, it unifies and distills the API for web servers, web frameworks, and software in between (the so-called middleware) into a single method call. 
</p> <p>Library home page: <a href="https://rubygems.org/gems/rack-2.2.6.2.gem">https://rubygems.org/gems/rack-2.2.6.2.gem</a></p> <p>Path to dependency file: /rails-intro/Gemfile.lock</p> <p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/rack-2.2.6.2.gem</p> <p> Dependency Hierarchy: - turbo-rails-1.3.3.gem (Root Library) - railties-7.0.4.2.gem - actionpack-7.0.4.2.gem - :x: **rack-2.2.6.2.gem** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A possible DoS vulnerability in the Multipart MIME parsing code in Rack. This vulnerability has been assigned the CVE identifier CVE-2023-27530. Versions Affected: All. Not affected: None Fixed Versions: 3.0.4.2, 2.2.6.3, 2.1.4.3, 2.0.9.3 The Multipart MIME parsing code in Rack limits the number of file parts, but does not limit the total number of parts that can be uploaded. Carefully crafted requests can abuse this and cause multipart parsing to take longer than expected. All users running an affected release should either upgrade or use one of the workarounds immediately. <p>Publish Date: 2023-03-03 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-27530>CVE-2023-27530</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2023-03-03</p> <p>Fix Resolution: rack - 2.0.9.3,2.1.4.3,2.2.6.3,3.0.4.2</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
turbo-rails-1.3.3.gem: 1 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>turbo-rails-1.3.3.gem</b></p></summary> <p></p> <p>Path to dependency file: /rails-intro/Gemfile.lock</p> <p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/rack-2.2.6.2.gem</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (turbo-rails version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2023-27530](https://www.mend.io/vulnerability-database/CVE-2023-27530) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | rack-2.2.6.2.gem | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2023-27530</summary> ### Vulnerable Library - <b>rack-2.2.6.2.gem</b></p> <p>Rack provides a minimal, modular and adaptable interface for developing web applications in Ruby. By wrapping HTTP requests and responses in the simplest way possible, it unifies and distills the API for web servers, web frameworks, and software in between (the so-called middleware) into a single method call. 
</p> <p>Library home page: <a href="https://rubygems.org/gems/rack-2.2.6.2.gem">https://rubygems.org/gems/rack-2.2.6.2.gem</a></p> <p>Path to dependency file: /rails-intro/Gemfile.lock</p> <p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/rack-2.2.6.2.gem</p> <p> Dependency Hierarchy: - turbo-rails-1.3.3.gem (Root Library) - railties-7.0.4.2.gem - actionpack-7.0.4.2.gem - :x: **rack-2.2.6.2.gem** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A possible DoS vulnerability in the Multipart MIME parsing code in Rack. This vulnerability has been assigned the CVE identifier CVE-2023-27530. Versions Affected: All. Not affected: None Fixed Versions: 3.0.4.2, 2.2.6.3, 2.1.4.3, 2.0.9.3 The Multipart MIME parsing code in Rack limits the number of file parts, but does not limit the total number of parts that can be uploaded. Carefully crafted requests can abuse this and cause multipart parsing to take longer than expected. All users running an affected release should either upgrade or use one of the workarounds immediately. <p>Publish Date: 2023-03-03 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-27530>CVE-2023-27530</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2023-03-03</p> <p>Fix Resolution: rack - 2.0.9.3,2.1.4.3,2.2.6.3,3.0.4.2</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_process
turbo rails gem vulnerabilities highest severity is vulnerable library turbo rails gem path to dependency file rails intro gemfile lock path to vulnerable library home wss scanner gem ruby cache rack gem vulnerabilities cve severity cvss dependency type fixed in turbo rails version remediation available high rack gem transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library rack gem rack provides a minimal modular and adaptable interface for developing web applications in ruby by wrapping http requests and responses in the simplest way possible it unifies and distills the api for web servers web frameworks and software in between the so called middleware into a single method call library home page a href path to dependency file rails intro gemfile lock path to vulnerable library home wss scanner gem ruby cache rack gem dependency hierarchy turbo rails gem root library railties gem actionpack gem x rack gem vulnerable library found in base branch main vulnerability details a possible dos vulnerability in the multipart mime parsing code in rack this vulnerability has been assigned the cve identifier cve versions affected all not affected none fixed versions the multipart mime parsing code in rack limits the number of file parts but does not limit the total number of parts that can be uploaded carefully crafted requests can abuse this and cause multipart parsing to take longer than expected all users running an affected release should either upgrade or use one of the workarounds immediately publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more 
information on scores click a href suggested fix type upgrade version release date fix resolution rack step up your open source security game with mend
0
465
2,772,885,815
IssuesEvent
2015-05-03 03:44:01
nealian/cse325_project6
https://api.github.com/repos/nealian/cse325_project6
opened
Implement the rest of the required functionality
requirement
Namely `fs_create`, `fs_delete`, `fs_read`, `fs_write`, `fs_get_filesize`, `fs_lseek`, and `fs_truncate`. Divide into sub-tasks as you see fit.
1.0
Implement the rest of the required functionality - Namely `fs_create`, `fs_delete`, `fs_read`, `fs_write`, `fs_get_filesize`, `fs_lseek`, and `fs_truncate`. Divide into sub-tasks as you see fit.
non_process
implement the rest of the required functionality namely fs create fs delete fs read fs write fs get filesize fs lseek and fs truncate divide into sub tasks as you see fit
0
201,258
15,802,147,940
IssuesEvent
2021-04-03 08:17:59
tzexern/ped
https://api.github.com/repos/tzexern/ped
opened
Minor Grammar Mistake under Remove Feature
severity.VeryLow type.DocumentationBug
![image.png](https://raw.githubusercontent.com/tzexern/ped/main/files/fe2a8e92-951b-4129-97ce-928490a16a13.png) Under remove, notes for `QUANTITY_TO_REMOVE`, is `...must be lower to equal...` supposed to be `... lower or equal...`? <!--session: 1617437377510-3c93c0e6-8609-4fd0-9f26-7eb717a196bb-->
1.0
Minor Grammar Mistake under Remove Feature - ![image.png](https://raw.githubusercontent.com/tzexern/ped/main/files/fe2a8e92-951b-4129-97ce-928490a16a13.png) Under remove, notes for `QUANTITY_TO_REMOVE`, is `...must be lower to equal...` supposed to be `... lower or equal...`? <!--session: 1617437377510-3c93c0e6-8609-4fd0-9f26-7eb717a196bb-->
non_process
minor grammar mistake under remove feature under remove notes for quantity to remove is must be lower to equal supposed to be lower or equal
0
1,695
4,346,120,150
IssuesEvent
2016-07-29 15:00:44
OpenBitcoinPrivacyProject/wallet-ratings
https://api.github.com/repos/OpenBitcoinPrivacyProject/wallet-ratings
opened
Revisit criterion for number of clicks to perform first backup
criteria easy-to-process
OBPPV3-CR61 is: > Number of clicks to create the first wallet backup This is currently only used as a criterion for OBPPV3-CM56: > Use eternal backups Under this attack: > Users may reuse non-ECDH addresses due to the fear of losing funds if avoiding reuse increases the risk that wallet backups will become unexpectedly invalid The other criterion under that countermeasure is OBPPV3-CR62: > Number of clicks needed to update an existing backup due to the creation of a new receiving or change address CR61 doesnโ€™t seem relevant to the privacy properties of eternal backups; youโ€™re equally screwed whether you reuse addresses or not if you do 0 backups. CR62 does properly capture the intention of the countermeasure, IMHO. If others agree, we can simply delete this criterion.
1.0
Revisit criterion for number of clicks to perform first backup - OBPPV3-CR61 is: > Number of clicks to create the first wallet backup This is currently only used as a criterion for OBPPV3-CM56: > Use eternal backups Under this attack: > Users may reuse non-ECDH addresses due to the fear of losing funds if avoiding reuse increases the risk that wallet backups will become unexpectedly invalid The other criterion under that countermeasure is OBPPV3-CR62: > Number of clicks needed to update an existing backup due to the creation of a new receiving or change address CR61 doesnโ€™t seem relevant to the privacy properties of eternal backups; youโ€™re equally screwed whether you reuse addresses or not if you do 0 backups. CR62 does properly capture the intention of the countermeasure, IMHO. If others agree, we can simply delete this criterion.
process
revisit criterion for number of clicks to perform first backup is number of clicks to create the first wallet backup this is currently only used as a criterion for use eternal backups under this attack users may reuse non ecdh addresses due to the fear of losing funds if avoiding reuse increases the risk that wallet backups will become unexpectedly invalid the other criterion under that countermeasure is number of clicks needed to update an existing backup due to the creation of a new receiving or change address doesnโ€™t seem relevant to the privacy properties of eternal backups youโ€™re equally screwed whether you reuse addresses or not if you do backups does properly capture the intention of the countermeasure imho if others agree we can simply delete this criterion
1
724,871
24,943,911,819
IssuesEvent
2022-10-31 21:32:21
GQDeltex/ft_transcendence
https://api.github.com/repos/GQDeltex/ft_transcendence
closed
implement to join a public channel
frontend priority
We can now change the channel from private to public, but there is no way to see on the frontend if you are in that channel right now, or if its only displayed, because its a public channel. Implement some kind of optical "change color" and remove "clickable" from shown public channels in channelList.
1.0
implement to join a public channel - We can now change the channel from private to public, but there is no way to see on the frontend if you are in that channel right now, or if its only displayed, because its a public channel. Implement some kind of optical "change color" and remove "clickable" from shown public channels in channelList.
non_process
implement to join a public channel we can now change the channel from private to public but there is no way to see on the frontend if you are in that channel right now or if its only displayed because its a public channel implement some kind of optical change color and remove clickable from shown public channels in channellist
0
18,488
24,550,903,457
IssuesEvent
2022-10-12 12:32:27
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] [Angular Upgrade] Dashboard > Sites > UI issue
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
UI issue is observed in the 'Sites' screen i.e., Alignment is not proper **AR:** ![AR](https://user-images.githubusercontent.com/86007179/178000649-b3b43df1-dc09-46e7-a67d-541b3fd90430.png) **ER:** ![Screenshot (1782)](https://user-images.githubusercontent.com/86007179/178001340-61c6a01d-4977-48d4-aca1-019f526d2e5c.png)
3.0
[PM] [Angular Upgrade] Dashboard > Sites > UI issue - UI issue is observed in the 'Sites' screen i.e., Alignment is not proper **AR:** ![AR](https://user-images.githubusercontent.com/86007179/178000649-b3b43df1-dc09-46e7-a67d-541b3fd90430.png) **ER:** ![Screenshot (1782)](https://user-images.githubusercontent.com/86007179/178001340-61c6a01d-4977-48d4-aca1-019f526d2e5c.png)
process
dashboard sites ui issue ui issue is observed in the sites screen i e alignment is not proper ar er
1
753,634
26,356,460,876
IssuesEvent
2023-01-11 10:04:48
space-wizards/space-station-14
https://api.github.com/repos/space-wizards/space-station-14
opened
Dragdrop shadow should be using an overlay
Priority: 2-Before Release Issue: Needs Cleanup Difficulty: 1-Easy
Might apply to anything using the dragdrophelper but ideally it only works in the 1 viewport (relative to the mouse).
1.0
Dragdrop shadow should be using an overlay - Might apply to anything using the dragdrophelper but ideally it only works in the 1 viewport (relative to the mouse).
non_process
dragdrop shadow should be using an overlay might apply to anything using the dragdrophelper but ideally it only works in the viewport relative to the mouse
0
10,568
13,368,987,894
IssuesEvent
2020-09-01 08:10:36
didi/mpx
https://api.github.com/repos/didi/mpx
closed
ๅˆๅง‹ๅŒ–้กน็›ฎไน‹ๅŽnpm run watchไน‹ๅŽๆŠฅ้”™๏ผŒไธ€็›ดๅœจbuilding...
processing
### ็Žฏๅขƒไฟกๆฏ - ็ณป็ปŸ๏ผšmacos 10.14.6 - node็‰ˆๆœฌ๏ผšv12.16.2 - @mpxjs/core: 2.5.11 ### ๆ“ไฝœๆญฅ้ชค 1. ๆŸฅ็œ‹ๆ–‡ๆกฃIntroduction็ซ ่Š‚็š„ใ€ๅฎ‰่ฃ…ไฝฟ็”จใ€‘ 2. npm i -g @mpxjs/cli 3. mpx init mpx-project๏ผˆ้€‰้กนไธบwx+no+yes+yes+yes+yes+no+no+mpx-project+A mpx project+'xcp'+touristappid๏ผ‰ 4. cd mpx-project 5. npm I 6. npm run watch ### ๆƒ…ๅ†ต่ฏดๆ˜Ž ๅŽŸๆฅไธ€็›ด็”จๅพฎไฟกๅฐ็จ‹ๅบๅŽŸ็”Ÿๅผ€ๅ‘๏ผŒไปŠๅคฉๆƒณ่ฏ•่ฏ•mpxใ€‚ๆŒ‰็…งๆ“ไฝœๆญฅ้ชคไน‹ๅŽ๏ผŒๆŠฅ้”™"(node:8554) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'name' of undefined"๏ผŒ็„ถๅŽๅ‘ฝไปค่กŒ็•Œ้ขไธ€็›ดๅœ็•™ๅœจโ ธ building...ใ€‚ๅฐ่ฏ•่ฟ‡control+cๅœๆญข๏ผŒ็„ถๅŽ็›ดๆŽฅๆ‰ง่กŒnpm run build๏ผŒไพ็„ถๆŠฅ้”™"ERROR in Entry module not found: Error: Can't resolve '/Users/xiecp/project/mpx-project/src/miniprogram/app.mpx' in '/Users/xiecp/project/mpx-project'"ใ€‚้กน็›ฎ็›ฎๅฝ•ไธ‹ไนŸๆœช็”Ÿๆˆdistๆ–‡ไปถๅคน๏ผŒๆ— ๆณ•ไฝฟ็”จๅฐ็จ‹ๅบๅผ€ๅ‘ๅทฅๅ…ท่ฟ›่กŒๆŸฅ็œ‹ใ€‚ ### ๅธŒๆœ›็ป“ๆžœ ๅœจmacos 10.14.6ๆ‰ง่กŒnpm run watchๅŽ่ƒฝๅคŸๆญฃๅธธ่ฟ่กŒๅˆๅง‹้กน็›ฎ๏ผŒไปฅไพฟ่ฟ›่กŒๅŽ็ปญไบ†่งฃๆก†ๆžถ๏ผŒ่ฟ›่กŒๅผ€ๅ‘ไฝ“้ชŒ ### ็›ธๅ…ณๆˆชๅ›พ - ๆ‰ง่กŒnpm run watchๅŽ <img width="650" alt="image" src="https://user-images.githubusercontent.com/67538938/88750988-42d1ec80-d189-11ea-9f45-bcaa00eccc2f.png"> - ๅผบๅˆถๅœๆญขnpm run watch๏ผŒๆ‰ง่กŒnpm run buildๅŽ <img width="650" alt="image" src="https://user-images.githubusercontent.com/67538938/88752711-3059b200-d18d-11ea-99e6-a583250a9e4c.png"> - ๅฎšไฝๅˆฐ็š„็›ธๅ…ณไปฃ็ ๏ผˆ่ทฏๅพ„๏ผš/node_modules/@mpxjs/webpack-plugin/lib/index.js๏ผ‰ <img width="915" alt="image" src="https://user-images.githubusercontent.com/67538938/88753077-f2a95900-d18d-11ea-9fa1-1b11d5cb3380.png">
1.0
ๅˆๅง‹ๅŒ–้กน็›ฎไน‹ๅŽnpm run watchไน‹ๅŽๆŠฅ้”™๏ผŒไธ€็›ดๅœจbuilding... - ### ็Žฏๅขƒไฟกๆฏ - ็ณป็ปŸ๏ผšmacos 10.14.6 - node็‰ˆๆœฌ๏ผšv12.16.2 - @mpxjs/core: 2.5.11 ### ๆ“ไฝœๆญฅ้ชค 1. ๆŸฅ็œ‹ๆ–‡ๆกฃIntroduction็ซ ่Š‚็š„ใ€ๅฎ‰่ฃ…ไฝฟ็”จใ€‘ 2. npm i -g @mpxjs/cli 3. mpx init mpx-project๏ผˆ้€‰้กนไธบwx+no+yes+yes+yes+yes+no+no+mpx-project+A mpx project+'xcp'+touristappid๏ผ‰ 4. cd mpx-project 5. npm I 6. npm run watch ### ๆƒ…ๅ†ต่ฏดๆ˜Ž ๅŽŸๆฅไธ€็›ด็”จๅพฎไฟกๅฐ็จ‹ๅบๅŽŸ็”Ÿๅผ€ๅ‘๏ผŒไปŠๅคฉๆƒณ่ฏ•่ฏ•mpxใ€‚ๆŒ‰็…งๆ“ไฝœๆญฅ้ชคไน‹ๅŽ๏ผŒๆŠฅ้”™"(node:8554) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'name' of undefined"๏ผŒ็„ถๅŽๅ‘ฝไปค่กŒ็•Œ้ขไธ€็›ดๅœ็•™ๅœจโ ธ building...ใ€‚ๅฐ่ฏ•่ฟ‡control+cๅœๆญข๏ผŒ็„ถๅŽ็›ดๆŽฅๆ‰ง่กŒnpm run build๏ผŒไพ็„ถๆŠฅ้”™"ERROR in Entry module not found: Error: Can't resolve '/Users/xiecp/project/mpx-project/src/miniprogram/app.mpx' in '/Users/xiecp/project/mpx-project'"ใ€‚้กน็›ฎ็›ฎๅฝ•ไธ‹ไนŸๆœช็”Ÿๆˆdistๆ–‡ไปถๅคน๏ผŒๆ— ๆณ•ไฝฟ็”จๅฐ็จ‹ๅบๅผ€ๅ‘ๅทฅๅ…ท่ฟ›่กŒๆŸฅ็œ‹ใ€‚ ### ๅธŒๆœ›็ป“ๆžœ ๅœจmacos 10.14.6ๆ‰ง่กŒnpm run watchๅŽ่ƒฝๅคŸๆญฃๅธธ่ฟ่กŒๅˆๅง‹้กน็›ฎ๏ผŒไปฅไพฟ่ฟ›่กŒๅŽ็ปญไบ†่งฃๆก†ๆžถ๏ผŒ่ฟ›่กŒๅผ€ๅ‘ไฝ“้ชŒ ### ็›ธๅ…ณๆˆชๅ›พ - ๆ‰ง่กŒnpm run watchๅŽ <img width="650" alt="image" src="https://user-images.githubusercontent.com/67538938/88750988-42d1ec80-d189-11ea-9f45-bcaa00eccc2f.png"> - ๅผบๅˆถๅœๆญขnpm run watch๏ผŒๆ‰ง่กŒnpm run buildๅŽ <img width="650" alt="image" src="https://user-images.githubusercontent.com/67538938/88752711-3059b200-d18d-11ea-99e6-a583250a9e4c.png"> - ๅฎšไฝๅˆฐ็š„็›ธๅ…ณไปฃ็ ๏ผˆ่ทฏๅพ„๏ผš/node_modules/@mpxjs/webpack-plugin/lib/index.js๏ผ‰ <img width="915" alt="image" src="https://user-images.githubusercontent.com/67538938/88753077-f2a95900-d18d-11ea-9fa1-1b11d5cb3380.png">
process
ๅˆๅง‹ๅŒ–้กน็›ฎไน‹ๅŽnpm run watchไน‹ๅŽๆŠฅ้”™๏ผŒไธ€็›ดๅœจbuilding ็Žฏๅขƒไฟกๆฏ ็ณป็ปŸ๏ผšmacos node็‰ˆๆœฌ๏ผš mpxjs core ๆ“ไฝœๆญฅ้ชค ๆŸฅ็œ‹ๆ–‡ๆกฃintroduction็ซ ่Š‚็š„ใ€ๅฎ‰่ฃ…ไฝฟ็”จใ€‘ npm i g mpxjs cli mpx init mpx project๏ผˆ้€‰้กนไธบwx no yes yes yes yes no no mpx project a mpx project xcp touristappid๏ผ‰ cd mpx project npm i npm run watch ๆƒ…ๅ†ต่ฏดๆ˜Ž ๅŽŸๆฅไธ€็›ด็”จๅพฎไฟกๅฐ็จ‹ๅบๅŽŸ็”Ÿๅผ€ๅ‘๏ผŒไปŠๅคฉๆƒณ่ฏ•่ฏ•mpxใ€‚ๆŒ‰็…งๆ“ไฝœๆญฅ้ชคไน‹ๅŽ๏ผŒๆŠฅ้”™ node unhandledpromiserejectionwarning typeerror cannot read property name of undefined ๏ผŒ็„ถๅŽๅ‘ฝไปค่กŒ็•Œ้ขไธ€็›ดๅœ็•™ๅœจโ ธ building ใ€‚ๅฐ่ฏ•่ฟ‡control cๅœๆญข๏ผŒ็„ถๅŽ็›ดๆŽฅๆ‰ง่กŒnpm run build๏ผŒไพ็„ถๆŠฅ้”™ error in entry module not found error can t resolve users xiecp project mpx project src miniprogram app mpx in users xiecp project mpx project ใ€‚้กน็›ฎ็›ฎๅฝ•ไธ‹ไนŸๆœช็”Ÿๆˆdistๆ–‡ไปถๅคน๏ผŒๆ— ๆณ•ไฝฟ็”จๅฐ็จ‹ๅบๅผ€ๅ‘ๅทฅๅ…ท่ฟ›่กŒๆŸฅ็œ‹ใ€‚ ๅธŒๆœ›็ป“ๆžœ ๅœจmacos run watchๅŽ่ƒฝๅคŸๆญฃๅธธ่ฟ่กŒๅˆๅง‹้กน็›ฎ๏ผŒไปฅไพฟ่ฟ›่กŒๅŽ็ปญไบ†่งฃๆก†ๆžถ๏ผŒ่ฟ›่กŒๅผ€ๅ‘ไฝ“้ชŒ ็›ธๅ…ณๆˆชๅ›พ ๆ‰ง่กŒnpm run watchๅŽ img width alt image src ๅผบๅˆถๅœๆญขnpm run watch๏ผŒๆ‰ง่กŒnpm run buildๅŽ img width alt image src ๅฎšไฝๅˆฐ็š„็›ธๅ…ณไปฃ็ ๏ผˆ่ทฏๅพ„๏ผš node modules mpxjs webpack plugin lib index js๏ผ‰ img width alt image src
1
578,610
17,148,958,588
IssuesEvent
2021-07-13 17:50:37
Couchers-org/couchers
https://api.github.com/repos/Couchers-org/couchers
closed
gRPC interceptors don't work with gRPC-Web Devtools
bug frontend priority: low
This means that the unathenticated error handler doesn't run correctly, so for example you won't be kicked out if the backend suddenly jails your or logs you out (e.g. due to ToS version updating). This works correctly in the prod build though because we disable the dev tools there. See also https://github.com/grpc/grpc-web/issues/1012 and https://github.com/SafetyCulture/grpc-web-devtools/issues/80
1.0
gRPC interceptors don't work with gRPC-Web Devtools - This means that the unathenticated error handler doesn't run correctly, so for example you won't be kicked out if the backend suddenly jails your or logs you out (e.g. due to ToS version updating). This works correctly in the prod build though because we disable the dev tools there. See also https://github.com/grpc/grpc-web/issues/1012 and https://github.com/SafetyCulture/grpc-web-devtools/issues/80
non_process
grpc interceptors don t work with grpc web devtools this means that the unathenticated error handler doesn t run correctly so for example you won t be kicked out if the backend suddenly jails your or logs you out e g due to tos version updating this works correctly in the prod build though because we disable the dev tools there see also and
0
87,208
25,068,155,419
IssuesEvent
2022-11-07 09:59:40
gitpod-io/gitpod
https://api.github.com/repos/gitpod-io/gitpod
closed
Webhooks for GitLab projects are disabled on Unauthorized Errors
type: bug git provider: gitlab feature: prebuilds feature: teams and projects
**Issue** We've learned now that GitLab webhooks are disabled automatically if the receiver (Gitpod) is responding with status codes other than 2xx. The rules for [failing webhooks](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#failing-webhooks) are basically: * on 5xx responses -> disable temporarily * on 4xx responses -> disable permanently, which requires manual re-enabling on GitLab There seems to be at least two cases where the Unauthorized Error might occur: * when the Gitpod user who created the project disconnects from the GitLab integration. * (as we assume after scanning logs) when there are repository forks involved, for which webhook events are registered, but the Gitpod users is not authorized. In any case, we should not upset GitLabs when webhooks cannot be triggered. **Solution** On Unauthorized Errors the GitLab webhook handler should respond with a code 2xx. Further we should investigate * the errors on events for repository forks. * how to make Unauthorized Errors visible to project owners.
1.0
Webhooks for GitLab projects are disabled on Unauthorized Errors - **Issue** We've learned now that GitLab webhooks are disabled automatically if the receiver (Gitpod) is responding with status codes other than 2xx. The rules for [failing webhooks](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#failing-webhooks) are basically: * on 5xx responses -> disable temporarily * on 4xx responses -> disable permanently, which requires manual re-enabling on GitLab There seems to be at least two cases where the Unauthorized Error might occur: * when the Gitpod user who created the project disconnects from the GitLab integration. * (as we assume after scanning logs) when there are repository forks involved, for which webhook events are registered, but the Gitpod users is not authorized. In any case, we should not upset GitLabs when webhooks cannot be triggered. **Solution** On Unauthorized Errors the GitLab webhook handler should respond with a code 2xx. Further we should investigate * the errors on events for repository forks. * how to make Unauthorized Errors visible to project owners.
non_process
webhooks for gitlab projects are disabled on unauthorized errors issue we ve learned now that gitlab webhooks are disabled automatically if the receiver gitpod is responding with status codes other than the rules for are basically on responses disable temporarily on responses disable permanently which requires manual re enabling on gitlab there seems to be at least two cases where the unauthorized error might occur when the gitpod user who created the project disconnects from the gitlab integration as we assume after scanning logs when there are repository forks involved for which webhook events are registered but the gitpod users is not authorized in any case we should not upset gitlabs when webhooks cannot be triggered solution on unauthorized errors the gitlab webhook handler should respond with a code further we should investigate the errors on events for repository forks how to make unauthorized errors visible to project owners
0
12,783
15,165,440,156
IssuesEvent
2021-02-12 15:04:07
endlessm/azafea
https://api.github.com/repos/endlessm/azafea
closed
Add a composite index on ping country + created_at
endless event processors enhancement
@ramcq tried to run the following query: ```sql SELECT DISTINCT p.country, (SELECT count(pq1.id) FROM ping_v1 pq1 WHERE pq1.country = p.country AND pq1.created_at >= '2019-01-01'::date AND pq1.created_at < '2019-04-01'::date) AS q1, (SELECT count(pq2.id) FROM ping_v1 pq2 WHERE pq2.country = p.country AND pq2.created_at >= '2019-04-01'::date AND pq2.created_at < '2019-07-01'::date) AS q2 FROM ping_v1 p WHERE p.created_at >= (now() - '1 day'::interval); ``` And it was very slow. The query plan looks like this: ``` QUERY PLAN -------------------------------------------------------------------------------------------------------------------------- Unique (cost=4780.17..4782.27 rows=144 width=32) -> Sort (cost=4780.17..4780.70 rows=210 width=32) Sort Key: p.country, ((SubPlan 1)), ((SubPlan 2)) -> Bitmap Heap Scan on ping_v1 p (cost=5.78..4772.07 rows=210 width=32) Recheck Cond: (created_at >= (now() - '1 day'::interval)) -> Bitmap Index Scan on ix_ping_v1_created_at (cost=0.00..5.73 rows=210 width=0) Index Cond: (created_at >= (now() - '1 day'::interval)) SubPlan 1 -> Aggregate (cost=11.31..11.32 rows=1 width=8) -> Bitmap Heap Scan on ping_v1 pq1 (cost=4.18..11.30 rows=1 width=4) Recheck Cond: ((created_at >= '2019-01-01'::date) AND (created_at < '2019-04-01'::date)) Filter: ((country)::text = (p.country)::text) -> Bitmap Index Scan on ix_ping_v1_created_at (cost=0.00..4.18 rows=3 width=0) Index Cond: ((created_at >= '2019-01-01'::date) AND (created_at < '2019-04-01'::date)) SubPlan 2 -> Aggregate (cost=11.31..11.32 rows=1 width=8) -> Bitmap Heap Scan on ping_v1 pq2 (cost=4.18..11.30 rows=1 width=4) Recheck Cond: ((created_at >= '2019-04-01'::date) AND (created_at < '2019-07-01'::date)) Filter: ((country)::text = (p.country)::text) -> Bitmap Index Scan on ix_ping_v1_created_at (cost=0.00..4.18 rows=3 width=0) Index Cond: ((created_at >= '2019-04-01'::date) AND (created_at < '2019-07-01'::date)) ``` Adding an index on `country` does not have any significant impact. 
However, adding a composite index on `country` and `created_at`, the query plan becomes: ``` QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------- Unique (cost=3465.26..3467.36 rows=144 width=32) -> Sort (cost=3465.26..3465.78 rows=210 width=32) Sort Key: p.country, ((SubPlan 1)), ((SubPlan 2)) -> Bitmap Heap Scan on ping_v1 p (cost=5.78..3457.16 rows=210 width=32) Recheck Cond: (created_at >= (now() - '1 day'::interval)) -> Bitmap Index Scan on ix_ping_v1_created_at (cost=0.00..5.73 rows=210 width=0) Index Cond: (created_at >= (now() - '1 day'::interval)) SubPlan 1 -> Aggregate (cost=8.17..8.18 rows=1 width=8) -> Index Scan using ix_ping_v1_country_created_at on ping_v1 pq1 (cost=0.15..8.17 rows=1 width=4) Index Cond: (((country)::text = (p.country)::text) AND (created_at >= '2019-01-01'::date) AND (created_at < '2019-04-01'::date)) SubPlan 2 -> Aggregate (cost=8.17..8.18 rows=1 width=8) -> Index Scan using ix_ping_v1_country_created_at on ping_v1 pq2 (cost=0.15..8.17 rows=1 width=4) Index Cond: (((country)::text = (p.country)::text) AND (created_at >= '2019-04-01'::date) AND (created_at < '2019-07-01'::date)) ``` The query still has a high cost, but it's roughly 27% better. This will require implementing #4 first.
1.0
Add a composite index on ping country + created_at - @ramcq tried to run the following query: ```sql SELECT DISTINCT p.country, (SELECT count(pq1.id) FROM ping_v1 pq1 WHERE pq1.country = p.country AND pq1.created_at >= '2019-01-01'::date AND pq1.created_at < '2019-04-01'::date) AS q1, (SELECT count(pq2.id) FROM ping_v1 pq2 WHERE pq2.country = p.country AND pq2.created_at >= '2019-04-01'::date AND pq2.created_at < '2019-07-01'::date) AS q2 FROM ping_v1 p WHERE p.created_at >= (now() - '1 day'::interval); ``` And it was very slow. The query plan looks like this: ``` QUERY PLAN -------------------------------------------------------------------------------------------------------------------------- Unique (cost=4780.17..4782.27 rows=144 width=32) -> Sort (cost=4780.17..4780.70 rows=210 width=32) Sort Key: p.country, ((SubPlan 1)), ((SubPlan 2)) -> Bitmap Heap Scan on ping_v1 p (cost=5.78..4772.07 rows=210 width=32) Recheck Cond: (created_at >= (now() - '1 day'::interval)) -> Bitmap Index Scan on ix_ping_v1_created_at (cost=0.00..5.73 rows=210 width=0) Index Cond: (created_at >= (now() - '1 day'::interval)) SubPlan 1 -> Aggregate (cost=11.31..11.32 rows=1 width=8) -> Bitmap Heap Scan on ping_v1 pq1 (cost=4.18..11.30 rows=1 width=4) Recheck Cond: ((created_at >= '2019-01-01'::date) AND (created_at < '2019-04-01'::date)) Filter: ((country)::text = (p.country)::text) -> Bitmap Index Scan on ix_ping_v1_created_at (cost=0.00..4.18 rows=3 width=0) Index Cond: ((created_at >= '2019-01-01'::date) AND (created_at < '2019-04-01'::date)) SubPlan 2 -> Aggregate (cost=11.31..11.32 rows=1 width=8) -> Bitmap Heap Scan on ping_v1 pq2 (cost=4.18..11.30 rows=1 width=4) Recheck Cond: ((created_at >= '2019-04-01'::date) AND (created_at < '2019-07-01'::date)) Filter: ((country)::text = (p.country)::text) -> Bitmap Index Scan on ix_ping_v1_created_at (cost=0.00..4.18 rows=3 width=0) Index Cond: ((created_at >= '2019-04-01'::date) AND (created_at < '2019-07-01'::date)) ``` Adding an index on 
`country` does not have any significant impact. However, adding a composite index on `country` and `created_at`, the query plan becomes: ``` QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------- Unique (cost=3465.26..3467.36 rows=144 width=32) -> Sort (cost=3465.26..3465.78 rows=210 width=32) Sort Key: p.country, ((SubPlan 1)), ((SubPlan 2)) -> Bitmap Heap Scan on ping_v1 p (cost=5.78..3457.16 rows=210 width=32) Recheck Cond: (created_at >= (now() - '1 day'::interval)) -> Bitmap Index Scan on ix_ping_v1_created_at (cost=0.00..5.73 rows=210 width=0) Index Cond: (created_at >= (now() - '1 day'::interval)) SubPlan 1 -> Aggregate (cost=8.17..8.18 rows=1 width=8) -> Index Scan using ix_ping_v1_country_created_at on ping_v1 pq1 (cost=0.15..8.17 rows=1 width=4) Index Cond: (((country)::text = (p.country)::text) AND (created_at >= '2019-01-01'::date) AND (created_at < '2019-04-01'::date)) SubPlan 2 -> Aggregate (cost=8.17..8.18 rows=1 width=8) -> Index Scan using ix_ping_v1_country_created_at on ping_v1 pq2 (cost=0.15..8.17 rows=1 width=4) Index Cond: (((country)::text = (p.country)::text) AND (created_at >= '2019-04-01'::date) AND (created_at < '2019-07-01'::date)) ``` The query still has a high cost, but it's roughly 27% better. This will require implementing #4 first.
process
add a composite index on ping country created at ramcq tried to run the following query sql select distinct p country select count id from ping where country p country and created at date and created at date as select count id from ping where country p country and created at date and created at date as from ping p where p created at now day interval and it was very slow the query plan looks like this query plan unique cost rows width sort cost rows width sort key p country subplan subplan bitmap heap scan on ping p cost rows width recheck cond created at now day interval bitmap index scan on ix ping created at cost rows width index cond created at now day interval subplan aggregate cost rows width bitmap heap scan on ping cost rows width recheck cond created at date and created at date filter country text p country text bitmap index scan on ix ping created at cost rows width index cond created at date and created at date subplan aggregate cost rows width bitmap heap scan on ping cost rows width recheck cond created at date and created at date filter country text p country text bitmap index scan on ix ping created at cost rows width index cond created at date and created at date adding an index on country does not have any significant impact however adding a composite index on country and created at the query plan becomes query plan unique cost rows width sort cost rows width sort key p country subplan subplan bitmap heap scan on ping p cost rows width recheck cond created at now day interval bitmap index scan on ix ping created at cost rows width index cond created at now day interval subplan aggregate cost rows width index scan using ix ping country created at on ping cost rows width index cond country text p country text and created at date and created at date subplan aggregate cost rows width index scan using ix ping country created at on ping cost rows width index cond country text p country text and created at date and created at date the query still has a 
high cost but it s roughly better this will require implementing first
1
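The composite-index fix described in the record above (an index on `(country, created_at)` turning the per-country date-range counts into index scans) can be sketched end to end. This is a sketch only: it uses SQLite instead of PostgreSQL so it is self-contained, and keeps the issue's `ping_v1` table and index names; the real migration would be a plain `CREATE INDEX` in Postgres.

```python
import sqlite3

# In-memory stand-in for the ping_v1 table from the issue.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ping_v1 (id INTEGER PRIMARY KEY, country TEXT, created_at TEXT)"
)
conn.executemany(
    "INSERT INTO ping_v1 (country, created_at) VALUES (?, ?)",
    [("US", "2019-02-01"), ("US", "2019-05-01"), ("DE", "2019-02-15")],
)

# The composite index the issue proposes (country first, then created_at,
# matching the equality + range shape of the subquery predicates).
conn.execute(
    "CREATE INDEX ix_ping_v1_country_created_at ON ping_v1 (country, created_at)"
)

# The per-country quarterly count from the issue's subplan; the plan should
# now show an index scan on the composite index rather than a filter.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT count(id) FROM ping_v1 "
    "WHERE country = 'US' AND created_at >= '2019-01-01' AND created_at < '2019-04-01'"
).fetchall()
print(plan)
```

In PostgreSQL the equivalent is simply `CREATE INDEX ix_ping_v1_country_created_at ON ping_v1 (country, created_at);` — the equality column leading the range column is what lets both subplan predicates land in `Index Cond`.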
148,086
5,658,731,927
IssuesEvent
2017-04-10 10:56:03
research-resource/research_resource
https://api.github.com/repos/research-resource/research_resource
closed
Alert on consent page to confirm saliva kit has been requested
priority-2 T2h
As a user, I want to be reassured that my request for a saliva sample kit has been acknowledged and that the kit is on the way to me What is the easiest way to do this? Ties in with #18 - so when the user enters their address and clicks to submit the request, a message would show with 'Thank you for requesting your saliva sample kit, it should be with you in X days'
1.0
Alert on consent page to confirm saliva kit has been requested - As a user, I want to be reassured that my request for a saliva sample kit has been acknowledged and that the kit is on the way to me What is the easiest way to do this? Ties in with #18 - so when the user enters their address and clicks to submit the request, a message would show with 'Thank you for requesting your saliva sample kit, it should be with you in X days'
non_process
alert on consent page to confirm saliva kit has been requested as a user i want to be reassured that my request for a saliva sample kit has been acknowledged and that the kit is on the way to me what is the easiest way to do this ties in with so when the user enters their address and clicks to submit the request a message would show with thank you for requesting your saliva sample kit it should be with you in x days
0
7,182
10,323,014,259
IssuesEvent
2019-08-31 17:27:30
AlbaHoo/cheeger.blog
https://api.github.com/repos/AlbaHoo/cheeger.blog
opened
Image processing with AWS lambda & S3 buckets
Gitalk Image processing with AWS lambda & S3 buckets
https://cheeger.com/develop/2018/03/17/Image-process-with-aws-lambda.html Image processing with AWS lambda & S3 buckets
1.0
Image processing with AWS lambda & S3 buckets - https://cheeger.com/develop/2018/03/17/Image-process-with-aws-lambda.html Image processing with AWS lambda & S3 buckets
process
image processing with aws lambda buckets image processing with aws lambda buckets
1
20,968
27,819,131,581
IssuesEvent
2023-03-19 02:06:51
cse442-at-ub/project_s23-cinco
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
closed
Set up demo for App's session authorization
Processing Task Sprint 2
*Test Cases* Local host with xampp apache running *Test 1* 1) After having the php file in the htdocs of the xampp folder on your pc and with xamp running apache, put in the local host url into your browser. Similar to what appears in the image right below in step 2 2) Inspect the page and go to the application section. Under cookies, you should see this. The persistent cookie is named session. ![Screenshot (1012).png](https://images.zenhubusercontent.com/63e2cfbfaca3b22752e618ee/e99483ea-f9af-425d-bb97-db7cd5e168f6) 3) Then go to phpmyadmin and check if there are changes made to the session id of user with id 1. For the purpose of testing, the token's expiration is a minute. So after a minute and closing out the browser (to change the session_id created by the php and make the change more noticeable in phpmyadmin), you can go to the page and see the changes in both the cookies and phpmyadmin. The database would look like this![Screenshot (1013).png](https://images.zenhubusercontent.com/63e2cfbfaca3b22752e618ee/5ed48b5c-84de-4952-87be-df6a91600d3e)
1.0
Set up demo for App's session authorization - *Test Cases* Local host with xampp apache running *Test 1* 1) After having the php file in the htdocs of the xampp folder on your pc and with xamp running apache, put in the local host url into your browser. Similar to what appears in the image right below in step 2 2) Inspect the page and go to the application section. Under cookies, you should see this. The persistent cookie is named session. ![Screenshot (1012).png](https://images.zenhubusercontent.com/63e2cfbfaca3b22752e618ee/e99483ea-f9af-425d-bb97-db7cd5e168f6) 3) Then go to phpmyadmin and check if there are changes made to the session id of user with id 1. For the purpose of testing, the token's expiration is a minute. So after a minute and closing out the browser (to change the session_id created by the php and make the change more noticeable in phpmyadmin), you can go to the page and see the changes in both the cookies and phpmyadmin. The database would look like this![Screenshot (1013).png](https://images.zenhubusercontent.com/63e2cfbfaca3b22752e618ee/5ed48b5c-84de-4952-87be-df6a91600d3e)
process
set up demo for app s session authorization test cases local host with xampp apache running test after having the php file in the htdocs of the xampp folder on your pc and with xamp running apache put in the local host url into your browser similar to what appears in the image right below in step inspect the page and go to the application section under cookies you should see this the persistent cookie is named session then go to phpmyadmin and check if there are changes made to the session id of user with id for the purpose of testing the token s expiration is a minute so after a minute and closing out the browser to change the session id created by the php and make the change more noticeable in phpmyadmin you can go to the page and see the changes in both the cookies and phpmyadmin the database would look like this
1
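The session-authorization demo above describes a persistent `session` cookie backed by a one-minute-expiry token whose value is regenerated and written back to the database. A minimal sketch of that expiring-token pattern, in Python rather than the project's PHP, is below; `SECRET` and `TTL_SECONDS` are illustrative names, not taken from the project's code.

```python
import hashlib
import hmac
import secrets
import time

SECRET = b"server-side-secret"  # assumption: some server-held signing key
TTL_SECONDS = 60  # the demo uses a one-minute expiry for testing

def issue_token(user_id, now=None):
    """Build a signed token of the form user_id:expires:nonce:signature."""
    now = time.time() if now is None else now
    expires = int(now) + TTL_SECONDS
    nonce = secrets.token_hex(8)  # fresh value, so each reissue differs
    payload = f"{user_id}:{expires}:{nonce}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def validate_token(token, now=None):
    """Reject tampered or expired tokens; accept everything else."""
    now = time.time() if now is None else now
    try:
        user_id, expires, nonce, sig = token.split(":")
    except ValueError:
        return False
    payload = f"{user_id}:{expires}:{nonce}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False
    return int(expires) > now
```

As in the demo, once `validate_token` returns false the server would mint a new token, store it against the user row (the `session_id` column the tester checks in phpMyAdmin), and set it as the cookie value again.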
18,041
24,051,694,534
IssuesEvent
2022-09-16 13:22:03
calaldees/KaraKara
https://api.github.com/repos/calaldees/KaraKara
closed
Ensure every track has a title and some attachments
feature processmedia2
Right now the code assumes every track has `track.tags.title`, because the title is referenced in a ton of places and adding is-null checks in every place is a mess. But then if a track slips through without a title, things crash :( Similar for attachments - we assume every track has `video`, `preview`, and `image` attachments Can we update `export_track_data.py` to log a warning and avoid exporting the track if any of these fields are missing?
1.0
Ensure every track has a title and some attachments - Right now the code assumes every track has `track.tags.title`, because the title is referenced in a ton of places and adding is-null checks in every place is a mess. But then if a track slips through without a title, things crash :( Similar for attachments - we assume every track has `video`, `preview`, and `image` attachments Can we update `export_track_data.py` to log a warning and avoid exporting the track if any of these fields are missing?
process
ensure every track has a title and some attachments right now the code assumes every track has track tags title because the title is referenced in a ton of places and adding is null checks in every place is a mess but then if a track slips through without a title things crash similar for attachments we assume every track has video preview and image attachments can we update export track data py to log a warning and avoid exporting the track if any of these fields are missing
1
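The filtering step the record above asks for in `export_track_data.py` — log a warning and skip any track missing a title tag or one of the `video`/`preview`/`image` attachments — can be sketched as below. The field names follow the issue; the exporter's real track structure may differ.

```python
import logging

log = logging.getLogger("export_track_data")

# Attachment kinds the downstream code assumes every track has.
REQUIRED_ATTACHMENTS = {"video", "preview", "image"}

def exportable_tracks(tracks):
    """Yield only tracks safe to export; warn about and skip the rest."""
    for track_id, track in tracks.items():
        if not track.get("tags", {}).get("title"):
            log.warning("skipping %s: no title tag", track_id)
            continue
        missing = REQUIRED_ATTACHMENTS - set(track.get("attachments", {}))
        if missing:
            log.warning("skipping %s: missing attachments %s",
                        track_id, sorted(missing))
            continue
        yield track_id, track
```

Validating once at export time keeps the is-null checks out of every place that reads `track.tags.title`, which is the mess the issue wants to avoid.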
46,177
18,982,971,994
IssuesEvent
2021-11-21 08:01:57
Azure/azure-powershell
https://api.github.com/repos/Azure/azure-powershell
closed
Set-AzPublicIpAddress does not work with IPs created from Public IP Prefix
Service Attention question Network - Virtual Network customer-reported needs-author-feedback no-recent-activity
<!-- - Make sure you are able to reproduce this issue on the latest released version of Az - https://www.powershellgallery.com/packages/Az - Please search the existing issues to see if there has been a similar issue filed - For issue related to importing a module, please refer to our troubleshooting guide: - https://github.com/Azure/azure-powershell/blob/master/documentation/troubleshoot-module-load.md --> ## Description Set-AzPublicIpAddress produces zone related errors when Public IP Address is created fromh Public IP Prefix. ``` Set-AzPublicIpAddress : Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/provid ers/Microsoft.Network/publicIPAddresses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa. StatusCode: 400 ReasonPhrase: Bad Request ErrorCode: ResourceAvailabilityZonesCannotBeModified ErrorMessage: Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Micros oft.Network/publicIPAddresses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa. 
OperationID : 105eca02-9d4a-44cc-b0ac-f3c2f261edc8 At line:1 char:16 + $updateMyPip | Set-AzPublicIpAddress + ~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : CloseError: (:) [Set-AzPublicIpAddress], NetworkCloudException + FullyQualifiedErrorId : Microsoft.Azure.Commands.Network.SetAzurePublicIpAddressCommand ``` ## Steps to reproduce ``` $rgName = "trr-mytestrg" $prefixName = "trr-mytestprefix" $pipName = "trr-mytestpip" $location = "eastus" $rg = New-AzResourceGroup ` -Name $rgName ` -Location $location $prefix = New-AzPublicIpPrefix ` -Name $prefixName ` -Location $location ` -ResourceGroupName $rgName ` -PrefixLength 31 ` -Sku Standard ` -IpAddressVersion IPv4 $pip = New-AzPublicIpAddress ` -Name $pipName ` -Location $location ` -ResourceGroupName $rgName ` -IpAddressVersion IPv4 ` -Sku Standard ` -AllocationMethod Static ` -PublicIpPrefix $prefix ` -IdleTimeoutInMinutes 10 $updateMyPip = Get-AzPublicIpAddress -Name $pipName -ResourceGroupName $rgName $updateMyPip.IdleTimeoutInMinutes = 10 $updateMyPip | Set-AzPublicIpAddress ``` ## Environment data <!-- Please run $PSVersionTable and paste the output in the below code block If running the Docker container image, indicate the tag of the image used and the version of Docker engine--> ``` Name Value ---- ----- PSVersion 5.1.18362.1171 PSEdition Desktop PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...} BuildVersion 10.0.18362.1171 CLRVersion 4.0.30319.42000 WSManStackVersion 3.0 PSRemotingProtocolVersion 2.3 SerializationVersion 1.1.0.1 ``` ## Module versions <!-- Please run (Get-Module -ListAvailable) and paste the output in the below code block --> ``` Script 2.2.1 Az.Accounts {Disable-AzDataCollection, Disable-AzContextAutosave, Enab... Script 1.1.1 Az.Advisor {Get-AzAdvisorRecommendation, Enable-AzAdvisorRecommendati... Script 2.0.1 Az.Aks {Get-AzAksCluster, New-AzAksCluster, Remove-AzAksCluster, ... Script 0.2.0 Az.AlertsManagement {Get-AzAlert, Get-AzAlertObjectHistory, Update-AzAlertStat... 
Script 1.1.4 Az.AnalysisServices {Resume-AzAnalysisServicesServer, Suspend-AzAnalysisServic... Script 2.1.0 Az.ApiManagement {Add-AzApiManagementApiToGateway, Add-AzApiManagementApiTo... Script 1.0.0 Az.AppConfiguration {Get-AzAppConfigurationStore, Get-AzAppConfigurationStoreK... Script 1.1.0 Az.ApplicationInsights {Get-AzApplicationInsights, New-AzApplicationInsights, Rem... Script 0.1.8 Az.Attestation {New-AzAttestation, Get-AzAttestation, Remove-AzAttestatio... Script 1.4.0 Az.Automation {Get-AzAutomationHybridWorkerGroup, Remove-AzAutomationHyb... Script 3.1.0 Az.Batch {Remove-AzBatchAccount, Get-AzBatchAccount, Get-AzBatchAcc... Script 2.0.0 Az.Billing {Get-AzBillingInvoice, Get-AzBillingPeriod, Get-AzEnrollme... Script 0.2.0 Az.Blockchain {Get-AzBlockchainConsortium, Get-AzBlockchainMember, Get-A... Script 0.2.13 Az.Blueprint {Get-AzBlueprint, Get-AzBlueprintAssignment, New-AzBluepri... Script 1.6.0 Az.Cdn {Get-AzCdnProfile, Get-AzCdnProfileSsoUrl, New-AzCdnProfil... Script 1.6 Az.CloneVirtualMachine New-AzVMClone Script 1.8.0 Az.CognitiveServices {Get-AzCognitiveServicesAccount, Get-AzCognitiveServicesAc... Script 4.6.0 Az.Compute {Remove-AzAvailabilitySet, Get-AzAvailabilitySet, New-AzAv... Script 0.7.0 Az.Compute.ManagedService ConvertTo-AzVhd Script 0.1.0 Az.ConnectedKubernetes {Get-AzConnectedKubernetes, New-AzConnectedKubernetes, Rem... Script 0.2.0 Az.ConnectedMachine {Connect-AzConnectedMachine, Get-AzConnectedMachine, Get-A... Script 0.7.0 Az.Consumption {Get-AzConsumptionBudget, Get-AzConsumptionMarketplace, Ge... Script 1.0.3 Az.ContainerInstance {New-AzContainerGroup, Get-AzContainerGroup, Remove-AzCont... Script 2.0.0 Az.ContainerRegistry {New-AzContainerRegistry, Get-AzContainerRegistry, Update-... Script 0.2.0 Az.CosmosDB {Get-AzCosmosDBSqlContainer, Get-AzCosmosDBSqlContainerThr... Script 0.1.0 Az.CustomProviders {Get-AzCustomProvider, Get-AzCustomProviderAssociation, Ne... 
Script 0.1.1 Az.DataBox {Get-AzDataBoxJob, Get-AzDataBoxCredential, Stop-AzDataBox... Script 1.1.0 Az.DataBoxEdge {Get-AzDataBoxEdgeJob, Get-AzDataBoxEdgeDevice, Invoke-AzD... Script 1.0.1 Az.Databricks {Get-AzDatabricksVNetPeering, Get-AzDatabricksWorkspace, N... Script 1.11.1 Az.DataFactory {Set-AzDataFactoryV2, Update-AzDataFactoryV2, Get-AzDataFa... Script 1.0.2 Az.DataLakeAnalytics {Get-AzDataLakeAnalyticsDataSource, New-AzDataLakeAnalytic... Script 1.3.0 Az.DataLakeStore {Get-AzDataLakeStoreTrustedIdProvider, Remove-AzDataLakeSt... Script 0.7.4 Az.DataMigration {New-AzDataMigrationDatabaseInfo, New-AzDataMigrationConne... Script 1.0.0 Az.DataShare {New-AzDataShareAccount, Get-AzDataShareAccount, Remove-Az... Script 0.1.0 Az.DedicatedHsm {Get-AzDedicatedHsm, New-AzDedicatedHsm, Remove-AzDedicate... Script 1.1.0 Az.DeploymentManager {Get-AzDeploymentManagerArtifactSource, New-AzDeploymentMa... Script 2.0.1 Az.DesktopVirtualization {Disconnect-AzWvdUserSession, Expand-AzWvdMsixImage, Get-A... Script 0.9.0 Az.DeviceProvisioningServices {New-AzIoTDeviceProvisioningService, Get-AzIoTDeviceProvis... Script 0.7.3 Az.DevSpaces {Get-AzDevSpacesController, New-AzDevSpacesController, Rem... Script 1.0.2 Az.DevTestLabs {Get-AzDtlAllowedVMSizesPolicy, Get-AzDtlAutoShutdownPolic... Script 0.1.0 Az.DigitalTwins {Get-AzDigitalTwinsEndpoint, Get-AzDigitalTwinsInstance, N... Script 1.1.2 Az.Dns {Get-AzDnsRecordSet, New-AzDnsRecordConfig, Remove-AzDnsRe... Script 1.3.0 Az.EventGrid {New-AzEventGridTopic, Get-AzEventGridTopic, Set-AzEventGr... Script 1.7.1 Az.EventHub {New-AzEventHubNamespace, Get-AzEventHubNamespace, Set-AzE... Script 1.6.1 Az.FrontDoor {New-AzFrontDoor, Get-AzFrontDoor, Set-AzFrontDoor, Remove... Script 2.0.0 Az.Functions {Get-AzFunctionApp, Get-AzFunctionAppAvailableLocation, Ge... 
Script 0.10.8 Az.GuestConfiguration {Get-AzVMGuestPolicyStatus, Get-AzVMGuestPolicyStatusHistory} Script 0.2.0 Az.HanaOnAzure {Get-AzSapMonitor, Get-AzSapMonitorProviderInstance, New-A... Script 4.1.0 Az.HDInsight {Get-AzHDInsightJob, New-AzHDInsightSqoopJobDefinition, Wa... Script 1.1.0 Az.HealthcareApis {New-AzHealthcareApisService, Remove-AzHealthcareApisServi... Script 0.1.1 Az.HPCCache {Get-AzHpcCacheSku, Get-AzHpcCacheUsageModel, Get-AzHpcCac... Script 0.1.2 Az.ImageBuilder {Get-AzImageBuilderRunOutput, Get-AzImageBuilderTemplate, ... Script 1.0.0.1 Az.ImageBuilder.Tools {Get-AIBBuildStatus, Initialize-AzureImageBuilder, Invoke-... Script 0.1.0 Az.ImportExport {Get-AzImportExport, Get-AzImportExportBitLockerKey, Get-A... Script 0.7.0 Az.Insights {Get-AzMetricDefinition, Get-AzMetric, Remove-AzLogProfile... Script 0.8.0 Az.IotCentral {New-AzIotCentralApp, Get-AzIotCentralApp, Set-AzIotCentra... Script 2.7.0 Az.IotHub {Add-AzIotHubKey, Get-AzIotHubEventHubConsumerGroup, Get-A... Script 3.1.0 Az.KeyVault {Add-AzManagedHsmKey, Get-AzManagedHsmKey, Remove-AzManage... Script 0.1.0 Az.KubernetesConfiguration {Get-AzKubernetesConfiguration, New-AzKubernetesConfigurat... Script 1.0.0 Az.Kusto {Add-AzKustoClusterLanguageExtension, Add-AzKustoDatabaseP... Script 1.4.0 Az.LogicApp {Get-AzIntegrationAccountAgreement, Get-AzIntegrationAccou... Script 1.1.3 Az.MachineLearning {Move-AzMlCommitmentAssociation, Get-AzMlCommitmentAssocia... Script 0.7.0 Az.MachineLearningCompute {Get-AzMlOpCluster, Get-AzMlOpClusterKey, Test-AzMlOpClust... Script 1.1.0 Az.Maintenance {Get-AzApplyUpdate, Get-AzConfigurationAssignment, Get-AzM... Script 0.7.3 Az.ManagedServiceIdentity {New-AzUserAssignedIdentity, Get-AzUserAssignedIdentity, R... Script 2.0.0 Az.ManagedServices {Get-AzManagedServicesAssignment, New-AzManagedServicesAss... Script 0.7.2 Az.ManagementPartner {Get-AzManagementPartner, New-AzManagementPartner, Update-... 
Script 0.7.3 Az.Maps {Get-AzMapsAccount, New-AzMapsAccount, Remove-AzMapsAccoun... Script 0.2.0 Az.MariaDb {Get-AzMariaDbConfiguration, Get-AzMariaDbConnectionString... Script 0.2.0 Az.Marketplace {Get-AzMarketplacePrivateStore, Get-AzMarketplacePrivateSt... Script 1.0.2 Az.MarketplaceOrdering {Get-AzMarketplaceTerms, Set-AzMarketplaceTerms} Script 1.1.1 Az.Media {Sync-AzMediaServiceStorageKey, Set-AzMediaServiceKey, Get... Script 0.1.1 Az.Migrate {Get-AzMigrateDiscoveredServer, Get-AzMigrateJob, Get-AzMi... Script 0.1.4 Az.MixedReality {Get-AzSpatialAnchorsAccount, Get-AzSpatialAnchorsAccountK... Script 2.2.0 Az.Monitor {Get-AzMetricDefinition, Get-AzMetric, Remove-AzLogProfile... Script 0.1.0 Az.MonitoringSolutions {Get-AzMonitorLogAnalyticsSolution, New-AzMonitorLogAnalyt... Script 0.2.0 Az.MySql {Get-AzMySqlConfiguration, Get-AzMySqlConnectionString, Ge... Script 0.2.0 Az.NetAppFiles {Get-AzNetAppFilesAccount, New-AzNetAppFilesAccount, Remov... Script 4.3.0 Az.Network {Add-AzApplicationGatewayAuthenticationCertificate, Get-Az... Script 4.1.0 Az.Network {Add-AzApplicationGatewayAuthenticationCertificate, Get-Az... Script 1.1.1 Az.NotificationHubs {Get-AzNotificationHub, Get-AzNotificationHubAuthorization... Script 2.3.0 Az.OperationalInsights {New-AzOperationalInsightsAzureActivityLogDataSource, New-... Script 0.2.0 Az.Peering {Get-AzPeering, Get-AzPeerAsn, New-AzPeerAsn, New-AzPeerin... Script 1.3.1 Az.PolicyInsights {Get-AzPolicyEvent, Get-AzPolicyState, Get-AzPolicyStateSu... Script 0.1.0 Az.Portal {Get-AzPortalDashboard, New-AzPortalDashboard, Remove-AzPo... Script 0.2.0 Az.PostgreSql {Get-AzPostgreSqlConfiguration, Get-AzPostgreSqlConnection... Script 1.1.2 Az.PowerBIEmbedded {Remove-AzPowerBIWorkspaceCollection, Get-AzPowerBIWorkspa... Script 1.0.3 Az.PrivateDns {Get-AzPrivateDnsZone, Remove-AzPrivateDnsZone, Set-AzPriv... Script 0.7.0 Az.Profile {Disable-AzDataCollection, Disable-AzContextAutosave, Enab... 
Script 3.0.1 Az.RecoveryServices {Get-AzRecoveryServicesBackupProperty, Get-AzRecoveryServi... Script 1.4.0 Az.RedisCache {Remove-AzRedisCachePatchSchedule, New-AzRedisCacheSchedul... Script 1.0.3 Az.Relay {New-AzRelayNamespace, Get-AzRelayNamespace, Set-AzRelayNa... Script 0.9.0 Az.Reservations {Get-AzReservationOrder, Get-AzReservation, Get-AzReservat... Script 0.7.7 Az.ResourceGraph Search-AzGraph Script 0.1.0 Az.ResourceMover {Add-AzResourceMoverMoveResource, Get-AzResourceMoverMoveC... Script 3.0.1 Az.Resources {Get-AzProviderOperation, Remove-AzRoleAssignment, Get-AzR... Script 0.7.4 Az.Search {New-AzSearchService, Get-AzSearchService, Set-AzSearchSer... Script 0.8.0 Az.Security {Get-AzSecurityAlert, Set-AzSecurityAlert, Get-AzSecurityA... Script 1.4.1 Az.ServiceBus {New-AzServiceBusNamespace, Get-AzServiceBusNamespace, Set... Script 2.2.0 Az.ServiceFabric {Add-AzServiceFabricClientCertificate, Add-AzServiceFabric... Script 1.2.0 Az.SignalR {New-AzSignalR, Get-AzSignalR, Get-AzSignalRKey, New-AzSig... Script 0.2.0 Az.SpringCloud {Deploy-AzSpringCloudApp, Get-AzSpringCloud, Get-AzSpringC... Script 2.12.0 Az.Sql {Get-AzSqlDatabaseTransparentDataEncryption, Get-AzSqlData... Script 1.1.0 Az.SqlVirtualMachine {New-AzSqlVM, Get-AzSqlVM, Update-AzSqlVM, Remove-AzSqlVM...} Script 0.1.0 Az.StackEdge {Get-AzStackEdgeJob, Get-AzStackEdgeDevice, Invoke-AzStack... Script 0.4.1 Az.StackHCI {Register-AzStackHCI, Unregister-AzStackHCI, Test-AzStackH... Script 3.0.0 Az.Storage {Get-AzStorageAccount, Get-AzStorageAccountKey, New-AzStor... Script 1.3.0 Az.StorageSync {Invoke-AzStorageSyncCompatibilityCheck, New-AzStorageSync... Manifest 1.0.7 Az.StorageTable {Add-AzStorageTableRow, Get-AzStorageTableRowAll, Get-AzSt... Script 1.0.1 Az.StreamAnalytics {Get-AzStreamAnalyticsFunction, Get-AzStreamAnalyticsDefau... Script 0.8.0 Az.Subscription {Update-AzSubscription, New-AzSubscriptionAlias, Get-AzSub... 
Script 1.0.0 Az.Support {Get-AzSupportService, Get-AzSupportProblemClassification,... Script 0.4.0 Az.Synapse {Get-AzSynapseSparkJob, Stop-AzSynapseSparkJob, Submit-AzS... Script 0.7.0 Az.Tags {Remove-AzTag, Get-AzTag, New-AzTag} Script 0.2.0 Az.TimeSeriesInsights {Get-AzTimeSeriesInsightsAccessPolicy, Get-AzTimeSeriesIns... Script 0.1.1 Az.Tools.Installer {Install-AzModule, Uninstall-AzModule, Update-AzModule} Script 0.1.0 Az.Tools.Migration {Disable-AzUpgradeDataCollection, Enable-AzUpgradeDataColl... Script 1.0.4 Az.TrafficManager {Add-AzTrafficManagerCustomHeaderToEndpoint, Remove-AzTraf... Script 0.7.0 Az.UsageAggregates Get-UsageAggregates Script 0.1.0 Az.VMWare {Get-AzVMWareAuthorization, Get-AzVMWareCluster, Get-AzVMW... Script 2.1.0 Az.Websites {Get-AzAppServicePlan, Set-AzAppServicePlan, New-AzAppServ... ``` ## Debug output <!-- Set $DebugPreference='Continue' before running the repro and paste the resulting debug stream in the below code block ATTENTION: Be sure to remove any sensitive information that may be in the logs --> ``` > $updateMyPip = Get-AzPublicIpAddress -Name $pipName -ResourceGroupName $rgName DEBUG: 9:25:29 AM - GetAzurePublicIpAddressCommand begin processing with ParameterSet 'NoExpandStandAloneIp'. DEBUG: 9:25:29 AM - using account id 'trindels@redapt.com'... DEBUG: [Common.Authentication]: Authenticating using Account: 'trindels@redapt.com', environment: 'AzureCloud', tenant: 'a9216385-5760-40b6-ac04-e893074255e0' DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: DEBUG: SharedTokenCacheCredential.GetToken succeeded. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: ExpiresOn: 2020-12-03T16:04:11.0000000+00:00 DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: DEBUG: SharedTokenCacheCredential.GetToken succeeded. 
Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: ExpiresOn: 2020-12-03T16:04:11.0000000+00:00 DEBUG: [Common.Authentication]: Received token with LoginType 'User', Tenant: 'a9216385-5760-40b6-ac04-e893074255e0', UserId: 'trindels@redapt.com' DEBUG: ============================ HTTP REQUEST ============================ HTTP Method: GET Absolute Uri: https://management.azure.com/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/M icrosoft.Network/publicIPAddresses/trr-mytestpip?api-version=2020-07-01 Headers: x-ms-client-request-id : 499acecf-77a1-4711-bc34-789ac105fd9a accept-language : en-US Body: DEBUG: ============================ HTTP RESPONSE ============================ Status Code: OK Headers: Pragma : no-cache x-ms-request-id : e00175d2-f535-4354-a384-38e04cc711f5 x-ms-correlation-request-id : d6e751fb-7333-4e15-b2b1-eb742f9c8a08 x-ms-arm-service-request-id : 598cf03f-6699-40c0-ab88-501c74058dd4 Strict-Transport-Security : max-age=31536000; includeSubDomains Cache-Control : no-cache ETag : W/"f9d9c30b-6f40-43dd-a640-9443754a3d5b" Server : Microsoft-HTTPAPI/2.0,Microsoft-HTTPAPI/2.0 x-ms-ratelimit-remaining-subscription-reads: 11998 x-ms-routing-request-id : CENTRALUS:20201203T152528Z:d6e751fb-7333-4e15-b2b1-eb742f9c8a08 X-Content-Type-Options : nosniff Date : Thu, 03 Dec 2020 15:25:28 GMT Body: { "name": "trr-mytestpip", "id": "/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAd dresses/trr-mytestpip", "etag": "W/\"f9d9c30b-6f40-43dd-a640-9443754a3d5b\"", "location": "eastus", "properties": { "provisioningState": "Succeeded", "resourceGuid": "55c124b9-439d-4969-aca1-47c753fbac58", "ipAddress": "20.72.169.252", "publicIPAddressVersion": "IPv4", "publicIPAllocationMethod": "Static", "idleTimeoutInMinutes": 10, "ipTags": [], "publicIPPrefix": { "id": 
"/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPPr efixes/trr-mytestprefix" } }, "type": "Microsoft.Network/publicIPAddresses", "sku": { "name": "Standard", "tier": "Regional" } } DEBUG: AzureQoSEvent: CommandName - Get-AzPublicIpAddress; IsSuccess - True; Duration - 00:00:00.2538936; DEBUG: Finish sending metric. DEBUG: 9:25:30 AM - GetAzurePublicIpAddressCommand end processing. > $updateMyPip.IdleTimeoutInMinutes = 10 > $updateMyPip | Set-AzPublicIpAddress DEBUG: 9:25:31 AM - SetAzurePublicIpAddressCommand begin processing with ParameterSet '__AllParameterSets'. DEBUG: 9:25:31 AM - using account id 'trindels@redapt.com'... DEBUG: [Common.Authentication]: Authenticating using Account: 'trindels@redapt.com', environment: 'AzureCloud', tenant: 'a9216385-5760-40b6-ac04-e893074255e0' DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: DEBUG: SharedTokenCacheCredential.GetToken succeeded. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: ExpiresOn: 2020-12-03T16:04:11.0000000+00:00 DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: DEBUG: SharedTokenCacheCredential.GetToken succeeded. 
Set-AzPublicIpAddress does not work with IPs created from Public IP Prefix

## Description

Set-AzPublicIpAddress produces zone-related errors when the Public IP Address was created from a Public IP Prefix.

```
Set-AzPublicIpAddress : Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAddresses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa.
StatusCode: 400
ReasonPhrase: Bad Request
ErrorCode: ResourceAvailabilityZonesCannotBeModified
ErrorMessage: Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAddresses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa.
OperationID : 105eca02-9d4a-44cc-b0ac-f3c2f261edc8 At line:1 char:16 + $updateMyPip | Set-AzPublicIpAddress + ~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : CloseError: (:) [Set-AzPublicIpAddress], NetworkCloudException + FullyQualifiedErrorId : Microsoft.Azure.Commands.Network.SetAzurePublicIpAddressCommand ``` ## Steps to reproduce ``` $rgName = "trr-mytestrg" $prefixName = "trr-mytestprefix" $pipName = "trr-mytestpip" $location = "eastus" $rg = New-AzResourceGroup ` -Name $rgName ` -Location $location $prefix = New-AzPublicIpPrefix ` -Name $prefixName ` -Location $location ` -ResourceGroupName $rgName ` -PrefixLength 31 ` -Sku Standard ` -IpAddressVersion IPv4 $pip = New-AzPublicIpAddress ` -Name $pipName ` -Location $location ` -ResourceGroupName $rgName ` -IpAddressVersion IPv4 ` -Sku Standard ` -AllocationMethod Static ` -PublicIpPrefix $prefix ` -IdleTimeoutInMinutes 10 $updateMyPip = Get-AzPublicIpAddress -Name $pipName -ResourceGroupName $rgName $updateMyPip.IdleTimeoutInMinutes = 10 $updateMyPip | Set-AzPublicIpAddress ``` ## Environment data <!-- Please run $PSVersionTable and paste the output in the below code block If running the Docker container image, indicate the tag of the image used and the version of Docker engine--> ``` Name Value ---- ----- PSVersion 5.1.18362.1171 PSEdition Desktop PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...} BuildVersion 10.0.18362.1171 CLRVersion 4.0.30319.42000 WSManStackVersion 3.0 PSRemotingProtocolVersion 2.3 SerializationVersion 1.1.0.1 ``` ## Module versions <!-- Please run (Get-Module -ListAvailable) and paste the output in the below code block --> ``` Script 2.2.1 Az.Accounts {Disable-AzDataCollection, Disable-AzContextAutosave, Enab... Script 1.1.1 Az.Advisor {Get-AzAdvisorRecommendation, Enable-AzAdvisorRecommendati... Script 2.0.1 Az.Aks {Get-AzAksCluster, New-AzAksCluster, Remove-AzAksCluster, ... Script 0.2.0 Az.AlertsManagement {Get-AzAlert, Get-AzAlertObjectHistory, Update-AzAlertStat... 
Script 1.1.4 Az.AnalysisServices {Resume-AzAnalysisServicesServer, Suspend-AzAnalysisServic... Script 2.1.0 Az.ApiManagement {Add-AzApiManagementApiToGateway, Add-AzApiManagementApiTo... Script 1.0.0 Az.AppConfiguration {Get-AzAppConfigurationStore, Get-AzAppConfigurationStoreK... Script 1.1.0 Az.ApplicationInsights {Get-AzApplicationInsights, New-AzApplicationInsights, Rem... Script 0.1.8 Az.Attestation {New-AzAttestation, Get-AzAttestation, Remove-AzAttestatio... Script 1.4.0 Az.Automation {Get-AzAutomationHybridWorkerGroup, Remove-AzAutomationHyb... Script 3.1.0 Az.Batch {Remove-AzBatchAccount, Get-AzBatchAccount, Get-AzBatchAcc... Script 2.0.0 Az.Billing {Get-AzBillingInvoice, Get-AzBillingPeriod, Get-AzEnrollme... Script 0.2.0 Az.Blockchain {Get-AzBlockchainConsortium, Get-AzBlockchainMember, Get-A... Script 0.2.13 Az.Blueprint {Get-AzBlueprint, Get-AzBlueprintAssignment, New-AzBluepri... Script 1.6.0 Az.Cdn {Get-AzCdnProfile, Get-AzCdnProfileSsoUrl, New-AzCdnProfil... Script 1.6 Az.CloneVirtualMachine New-AzVMClone Script 1.8.0 Az.CognitiveServices {Get-AzCognitiveServicesAccount, Get-AzCognitiveServicesAc... Script 4.6.0 Az.Compute {Remove-AzAvailabilitySet, Get-AzAvailabilitySet, New-AzAv... Script 0.7.0 Az.Compute.ManagedService ConvertTo-AzVhd Script 0.1.0 Az.ConnectedKubernetes {Get-AzConnectedKubernetes, New-AzConnectedKubernetes, Rem... Script 0.2.0 Az.ConnectedMachine {Connect-AzConnectedMachine, Get-AzConnectedMachine, Get-A... Script 0.7.0 Az.Consumption {Get-AzConsumptionBudget, Get-AzConsumptionMarketplace, Ge... Script 1.0.3 Az.ContainerInstance {New-AzContainerGroup, Get-AzContainerGroup, Remove-AzCont... Script 2.0.0 Az.ContainerRegistry {New-AzContainerRegistry, Get-AzContainerRegistry, Update-... Script 0.2.0 Az.CosmosDB {Get-AzCosmosDBSqlContainer, Get-AzCosmosDBSqlContainerThr... Script 0.1.0 Az.CustomProviders {Get-AzCustomProvider, Get-AzCustomProviderAssociation, Ne... 
Script 0.1.1 Az.DataBox {Get-AzDataBoxJob, Get-AzDataBoxCredential, Stop-AzDataBox... Script 1.1.0 Az.DataBoxEdge {Get-AzDataBoxEdgeJob, Get-AzDataBoxEdgeDevice, Invoke-AzD... Script 1.0.1 Az.Databricks {Get-AzDatabricksVNetPeering, Get-AzDatabricksWorkspace, N... Script 1.11.1 Az.DataFactory {Set-AzDataFactoryV2, Update-AzDataFactoryV2, Get-AzDataFa... Script 1.0.2 Az.DataLakeAnalytics {Get-AzDataLakeAnalyticsDataSource, New-AzDataLakeAnalytic... Script 1.3.0 Az.DataLakeStore {Get-AzDataLakeStoreTrustedIdProvider, Remove-AzDataLakeSt... Script 0.7.4 Az.DataMigration {New-AzDataMigrationDatabaseInfo, New-AzDataMigrationConne... Script 1.0.0 Az.DataShare {New-AzDataShareAccount, Get-AzDataShareAccount, Remove-Az... Script 0.1.0 Az.DedicatedHsm {Get-AzDedicatedHsm, New-AzDedicatedHsm, Remove-AzDedicate... Script 1.1.0 Az.DeploymentManager {Get-AzDeploymentManagerArtifactSource, New-AzDeploymentMa... Script 2.0.1 Az.DesktopVirtualization {Disconnect-AzWvdUserSession, Expand-AzWvdMsixImage, Get-A... Script 0.9.0 Az.DeviceProvisioningServices {New-AzIoTDeviceProvisioningService, Get-AzIoTDeviceProvis... Script 0.7.3 Az.DevSpaces {Get-AzDevSpacesController, New-AzDevSpacesController, Rem... Script 1.0.2 Az.DevTestLabs {Get-AzDtlAllowedVMSizesPolicy, Get-AzDtlAutoShutdownPolic... Script 0.1.0 Az.DigitalTwins {Get-AzDigitalTwinsEndpoint, Get-AzDigitalTwinsInstance, N... Script 1.1.2 Az.Dns {Get-AzDnsRecordSet, New-AzDnsRecordConfig, Remove-AzDnsRe... Script 1.3.0 Az.EventGrid {New-AzEventGridTopic, Get-AzEventGridTopic, Set-AzEventGr... Script 1.7.1 Az.EventHub {New-AzEventHubNamespace, Get-AzEventHubNamespace, Set-AzE... Script 1.6.1 Az.FrontDoor {New-AzFrontDoor, Get-AzFrontDoor, Set-AzFrontDoor, Remove... Script 2.0.0 Az.Functions {Get-AzFunctionApp, Get-AzFunctionAppAvailableLocation, Ge... 
Script 0.10.8 Az.GuestConfiguration {Get-AzVMGuestPolicyStatus, Get-AzVMGuestPolicyStatusHistory} Script 0.2.0 Az.HanaOnAzure {Get-AzSapMonitor, Get-AzSapMonitorProviderInstance, New-A... Script 4.1.0 Az.HDInsight {Get-AzHDInsightJob, New-AzHDInsightSqoopJobDefinition, Wa... Script 1.1.0 Az.HealthcareApis {New-AzHealthcareApisService, Remove-AzHealthcareApisServi... Script 0.1.1 Az.HPCCache {Get-AzHpcCacheSku, Get-AzHpcCacheUsageModel, Get-AzHpcCac... Script 0.1.2 Az.ImageBuilder {Get-AzImageBuilderRunOutput, Get-AzImageBuilderTemplate, ... Script 1.0.0.1 Az.ImageBuilder.Tools {Get-AIBBuildStatus, Initialize-AzureImageBuilder, Invoke-... Script 0.1.0 Az.ImportExport {Get-AzImportExport, Get-AzImportExportBitLockerKey, Get-A... Script 0.7.0 Az.Insights {Get-AzMetricDefinition, Get-AzMetric, Remove-AzLogProfile... Script 0.8.0 Az.IotCentral {New-AzIotCentralApp, Get-AzIotCentralApp, Set-AzIotCentra... Script 2.7.0 Az.IotHub {Add-AzIotHubKey, Get-AzIotHubEventHubConsumerGroup, Get-A... Script 3.1.0 Az.KeyVault {Add-AzManagedHsmKey, Get-AzManagedHsmKey, Remove-AzManage... Script 0.1.0 Az.KubernetesConfiguration {Get-AzKubernetesConfiguration, New-AzKubernetesConfigurat... Script 1.0.0 Az.Kusto {Add-AzKustoClusterLanguageExtension, Add-AzKustoDatabaseP... Script 1.4.0 Az.LogicApp {Get-AzIntegrationAccountAgreement, Get-AzIntegrationAccou... Script 1.1.3 Az.MachineLearning {Move-AzMlCommitmentAssociation, Get-AzMlCommitmentAssocia... Script 0.7.0 Az.MachineLearningCompute {Get-AzMlOpCluster, Get-AzMlOpClusterKey, Test-AzMlOpClust... Script 1.1.0 Az.Maintenance {Get-AzApplyUpdate, Get-AzConfigurationAssignment, Get-AzM... Script 0.7.3 Az.ManagedServiceIdentity {New-AzUserAssignedIdentity, Get-AzUserAssignedIdentity, R... Script 2.0.0 Az.ManagedServices {Get-AzManagedServicesAssignment, New-AzManagedServicesAss... Script 0.7.2 Az.ManagementPartner {Get-AzManagementPartner, New-AzManagementPartner, Update-... 
Script 0.7.3 Az.Maps {Get-AzMapsAccount, New-AzMapsAccount, Remove-AzMapsAccoun... Script 0.2.0 Az.MariaDb {Get-AzMariaDbConfiguration, Get-AzMariaDbConnectionString... Script 0.2.0 Az.Marketplace {Get-AzMarketplacePrivateStore, Get-AzMarketplacePrivateSt... Script 1.0.2 Az.MarketplaceOrdering {Get-AzMarketplaceTerms, Set-AzMarketplaceTerms} Script 1.1.1 Az.Media {Sync-AzMediaServiceStorageKey, Set-AzMediaServiceKey, Get... Script 0.1.1 Az.Migrate {Get-AzMigrateDiscoveredServer, Get-AzMigrateJob, Get-AzMi... Script 0.1.4 Az.MixedReality {Get-AzSpatialAnchorsAccount, Get-AzSpatialAnchorsAccountK... Script 2.2.0 Az.Monitor {Get-AzMetricDefinition, Get-AzMetric, Remove-AzLogProfile... Script 0.1.0 Az.MonitoringSolutions {Get-AzMonitorLogAnalyticsSolution, New-AzMonitorLogAnalyt... Script 0.2.0 Az.MySql {Get-AzMySqlConfiguration, Get-AzMySqlConnectionString, Ge... Script 0.2.0 Az.NetAppFiles {Get-AzNetAppFilesAccount, New-AzNetAppFilesAccount, Remov... Script 4.3.0 Az.Network {Add-AzApplicationGatewayAuthenticationCertificate, Get-Az... Script 4.1.0 Az.Network {Add-AzApplicationGatewayAuthenticationCertificate, Get-Az... Script 1.1.1 Az.NotificationHubs {Get-AzNotificationHub, Get-AzNotificationHubAuthorization... Script 2.3.0 Az.OperationalInsights {New-AzOperationalInsightsAzureActivityLogDataSource, New-... Script 0.2.0 Az.Peering {Get-AzPeering, Get-AzPeerAsn, New-AzPeerAsn, New-AzPeerin... Script 1.3.1 Az.PolicyInsights {Get-AzPolicyEvent, Get-AzPolicyState, Get-AzPolicyStateSu... Script 0.1.0 Az.Portal {Get-AzPortalDashboard, New-AzPortalDashboard, Remove-AzPo... Script 0.2.0 Az.PostgreSql {Get-AzPostgreSqlConfiguration, Get-AzPostgreSqlConnection... Script 1.1.2 Az.PowerBIEmbedded {Remove-AzPowerBIWorkspaceCollection, Get-AzPowerBIWorkspa... Script 1.0.3 Az.PrivateDns {Get-AzPrivateDnsZone, Remove-AzPrivateDnsZone, Set-AzPriv... Script 0.7.0 Az.Profile {Disable-AzDataCollection, Disable-AzContextAutosave, Enab... 
Script 3.0.1 Az.RecoveryServices {Get-AzRecoveryServicesBackupProperty, Get-AzRecoveryServi... Script 1.4.0 Az.RedisCache {Remove-AzRedisCachePatchSchedule, New-AzRedisCacheSchedul... Script 1.0.3 Az.Relay {New-AzRelayNamespace, Get-AzRelayNamespace, Set-AzRelayNa... Script 0.9.0 Az.Reservations {Get-AzReservationOrder, Get-AzReservation, Get-AzReservat... Script 0.7.7 Az.ResourceGraph Search-AzGraph Script 0.1.0 Az.ResourceMover {Add-AzResourceMoverMoveResource, Get-AzResourceMoverMoveC... Script 3.0.1 Az.Resources {Get-AzProviderOperation, Remove-AzRoleAssignment, Get-AzR... Script 0.7.4 Az.Search {New-AzSearchService, Get-AzSearchService, Set-AzSearchSer... Script 0.8.0 Az.Security {Get-AzSecurityAlert, Set-AzSecurityAlert, Get-AzSecurityA... Script 1.4.1 Az.ServiceBus {New-AzServiceBusNamespace, Get-AzServiceBusNamespace, Set... Script 2.2.0 Az.ServiceFabric {Add-AzServiceFabricClientCertificate, Add-AzServiceFabric... Script 1.2.0 Az.SignalR {New-AzSignalR, Get-AzSignalR, Get-AzSignalRKey, New-AzSig... Script 0.2.0 Az.SpringCloud {Deploy-AzSpringCloudApp, Get-AzSpringCloud, Get-AzSpringC... Script 2.12.0 Az.Sql {Get-AzSqlDatabaseTransparentDataEncryption, Get-AzSqlData... Script 1.1.0 Az.SqlVirtualMachine {New-AzSqlVM, Get-AzSqlVM, Update-AzSqlVM, Remove-AzSqlVM...} Script 0.1.0 Az.StackEdge {Get-AzStackEdgeJob, Get-AzStackEdgeDevice, Invoke-AzStack... Script 0.4.1 Az.StackHCI {Register-AzStackHCI, Unregister-AzStackHCI, Test-AzStackH... Script 3.0.0 Az.Storage {Get-AzStorageAccount, Get-AzStorageAccountKey, New-AzStor... Script 1.3.0 Az.StorageSync {Invoke-AzStorageSyncCompatibilityCheck, New-AzStorageSync... Manifest 1.0.7 Az.StorageTable {Add-AzStorageTableRow, Get-AzStorageTableRowAll, Get-AzSt... Script 1.0.1 Az.StreamAnalytics {Get-AzStreamAnalyticsFunction, Get-AzStreamAnalyticsDefau... Script 0.8.0 Az.Subscription {Update-AzSubscription, New-AzSubscriptionAlias, Get-AzSub... 
Script 1.0.0 Az.Support {Get-AzSupportService, Get-AzSupportProblemClassification,... Script 0.4.0 Az.Synapse {Get-AzSynapseSparkJob, Stop-AzSynapseSparkJob, Submit-AzS... Script 0.7.0 Az.Tags {Remove-AzTag, Get-AzTag, New-AzTag} Script 0.2.0 Az.TimeSeriesInsights {Get-AzTimeSeriesInsightsAccessPolicy, Get-AzTimeSeriesIns... Script 0.1.1 Az.Tools.Installer {Install-AzModule, Uninstall-AzModule, Update-AzModule} Script 0.1.0 Az.Tools.Migration {Disable-AzUpgradeDataCollection, Enable-AzUpgradeDataColl... Script 1.0.4 Az.TrafficManager {Add-AzTrafficManagerCustomHeaderToEndpoint, Remove-AzTraf... Script 0.7.0 Az.UsageAggregates Get-UsageAggregates Script 0.1.0 Az.VMWare {Get-AzVMWareAuthorization, Get-AzVMWareCluster, Get-AzVMW... Script 2.1.0 Az.Websites {Get-AzAppServicePlan, Set-AzAppServicePlan, New-AzAppServ... ``` ## Debug output <!-- Set $DebugPreference='Continue' before running the repro and paste the resulting debug stream in the below code block ATTENTION: Be sure to remove any sensitive information that may be in the logs --> ``` > $updateMyPip = Get-AzPublicIpAddress -Name $pipName -ResourceGroupName $rgName DEBUG: 9:25:29 AM - GetAzurePublicIpAddressCommand begin processing with ParameterSet 'NoExpandStandAloneIp'. DEBUG: 9:25:29 AM - using account id 'trindels@redapt.com'... DEBUG: [Common.Authentication]: Authenticating using Account: 'trindels@redapt.com', environment: 'AzureCloud', tenant: 'a9216385-5760-40b6-ac04-e893074255e0' DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: DEBUG: SharedTokenCacheCredential.GetToken succeeded. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: ExpiresOn: 2020-12-03T16:04:11.0000000+00:00 DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: DEBUG: SharedTokenCacheCredential.GetToken succeeded. 
Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: ExpiresOn: 2020-12-03T16:04:11.0000000+00:00 DEBUG: [Common.Authentication]: Received token with LoginType 'User', Tenant: 'a9216385-5760-40b6-ac04-e893074255e0', UserId: 'trindels@redapt.com' DEBUG: ============================ HTTP REQUEST ============================ HTTP Method: GET Absolute Uri: https://management.azure.com/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/M icrosoft.Network/publicIPAddresses/trr-mytestpip?api-version=2020-07-01 Headers: x-ms-client-request-id : 499acecf-77a1-4711-bc34-789ac105fd9a accept-language : en-US Body: DEBUG: ============================ HTTP RESPONSE ============================ Status Code: OK Headers: Pragma : no-cache x-ms-request-id : e00175d2-f535-4354-a384-38e04cc711f5 x-ms-correlation-request-id : d6e751fb-7333-4e15-b2b1-eb742f9c8a08 x-ms-arm-service-request-id : 598cf03f-6699-40c0-ab88-501c74058dd4 Strict-Transport-Security : max-age=31536000; includeSubDomains Cache-Control : no-cache ETag : W/"f9d9c30b-6f40-43dd-a640-9443754a3d5b" Server : Microsoft-HTTPAPI/2.0,Microsoft-HTTPAPI/2.0 x-ms-ratelimit-remaining-subscription-reads: 11998 x-ms-routing-request-id : CENTRALUS:20201203T152528Z:d6e751fb-7333-4e15-b2b1-eb742f9c8a08 X-Content-Type-Options : nosniff Date : Thu, 03 Dec 2020 15:25:28 GMT Body: { "name": "trr-mytestpip", "id": "/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAd dresses/trr-mytestpip", "etag": "W/\"f9d9c30b-6f40-43dd-a640-9443754a3d5b\"", "location": "eastus", "properties": { "provisioningState": "Succeeded", "resourceGuid": "55c124b9-439d-4969-aca1-47c753fbac58", "ipAddress": "20.72.169.252", "publicIPAddressVersion": "IPv4", "publicIPAllocationMethod": "Static", "idleTimeoutInMinutes": 10, "ipTags": [], "publicIPPrefix": { "id": 
"/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPPr efixes/trr-mytestprefix" } }, "type": "Microsoft.Network/publicIPAddresses", "sku": { "name": "Standard", "tier": "Regional" } } DEBUG: AzureQoSEvent: CommandName - Get-AzPublicIpAddress; IsSuccess - True; Duration - 00:00:00.2538936; DEBUG: Finish sending metric. DEBUG: 9:25:30 AM - GetAzurePublicIpAddressCommand end processing. > $updateMyPip.IdleTimeoutInMinutes = 10 > $updateMyPip | Set-AzPublicIpAddress DEBUG: 9:25:31 AM - SetAzurePublicIpAddressCommand begin processing with ParameterSet '__AllParameterSets'. DEBUG: 9:25:31 AM - using account id 'trindels@redapt.com'... DEBUG: [Common.Authentication]: Authenticating using Account: 'trindels@redapt.com', environment: 'AzureCloud', tenant: 'a9216385-5760-40b6-ac04-e893074255e0' DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: DEBUG: SharedTokenCacheCredential.GetToken succeeded. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: ExpiresOn: 2020-12-03T16:04:11.0000000+00:00 DEBUG: SharedTokenCacheCredential.GetToken invoked. Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: DEBUG: SharedTokenCacheCredential.GetToken succeeded. 
Scopes: [ https://management.core.windows.net//.default ] ParentRequestId: ExpiresOn: 2020-12-03T16:04:11.0000000+00:00 DEBUG: [Common.Authentication]: Received token with LoginType 'User', Tenant: 'a9216385-5760-40b6-ac04-e893074255e0', UserId: 'trindels@redapt.com' DEBUG: ============================ HTTP REQUEST ============================ HTTP Method: GET Absolute Uri: https://management.azure.com/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/M icrosoft.Network/publicIPAddresses/trr-mytestpip?api-version=2020-07-01 Headers: x-ms-client-request-id : 7b43b132-d095-422e-b084-5291e5201299 accept-language : en-US Body: DEBUG: ============================ HTTP RESPONSE ============================ Status Code: OK Headers: Pragma : no-cache x-ms-request-id : 1adf8628-e534-4426-b731-fe1c6461ea66 x-ms-correlation-request-id : 7ebfb5ca-f5c0-4282-a43f-4c4e68501fd7 x-ms-arm-service-request-id : 7ac36e0d-e8cc-42d0-b492-32097e6b8cd9 Strict-Transport-Security : max-age=31536000; includeSubDomains Cache-Control : no-cache ETag : W/"f9d9c30b-6f40-43dd-a640-9443754a3d5b" Server : Microsoft-HTTPAPI/2.0,Microsoft-HTTPAPI/2.0 x-ms-ratelimit-remaining-subscription-reads: 11999 x-ms-routing-request-id : CENTRALUS:20201203T152529Z:7ebfb5ca-f5c0-4282-a43f-4c4e68501fd7 X-Content-Type-Options : nosniff Date : Thu, 03 Dec 2020 15:25:29 GMT Body: { "name": "trr-mytestpip", "id": "/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAd dresses/trr-mytestpip", "etag": "W/\"f9d9c30b-6f40-43dd-a640-9443754a3d5b\"", "location": "eastus", "properties": { "provisioningState": "Succeeded", "resourceGuid": "55c124b9-439d-4969-aca1-47c753fbac58", "ipAddress": "20.72.169.252", "publicIPAddressVersion": "IPv4", "publicIPAllocationMethod": "Static", "idleTimeoutInMinutes": 10, "ipTags": [], "publicIPPrefix": { "id": 
"/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPPr efixes/trr-mytestprefix" } }, "type": "Microsoft.Network/publicIPAddresses", "sku": { "name": "Standard", "tier": "Regional" } } DEBUG: ============================ HTTP REQUEST ============================ HTTP Method: PUT Absolute Uri: https://management.azure.com/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/M icrosoft.Network/publicIPAddresses/trr-mytestpip?api-version=2020-07-01 Headers: x-ms-client-request-id : 21a7e76d-5a84-4d75-8fd7-4d9dda43bc66 accept-language : en-US Body: { "sku": { "name": "Standard", "tier": "Regional" }, "properties": { "publicIPAllocationMethod": "Static", "publicIPAddressVersion": "IPv4", "ipTags": [], "ipAddress": "20.72.169.252", "publicIPPrefix": { "id": "/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPPr efixes/trr-mytestprefix" }, "idleTimeoutInMinutes": 10 }, "zones": [], "id": "/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAd dresses/trr-mytestpip", "location": "eastus" } DEBUG: ============================ HTTP RESPONSE ============================ Status Code: BadRequest Headers: Pragma : no-cache x-ms-request-id : 105eca02-9d4a-44cc-b0ac-f3c2f261edc8 x-ms-correlation-request-id : 9166ba1b-5a36-440b-8ab7-17f55f96eaa0 x-ms-arm-service-request-id : 0b5b922e-5f62-4f8e-b56a-b0fe6a0837ec Strict-Transport-Security : max-age=31536000; includeSubDomains Cache-Control : no-cache Server : Microsoft-HTTPAPI/2.0,Microsoft-HTTPAPI/2.0 x-ms-ratelimit-remaining-subscription-writes: 1199 x-ms-routing-request-id : CENTRALUS:20201203T152530Z:9166ba1b-5a36-440b-8ab7-17f55f96eaa0 X-Content-Type-Options : nosniff Date : Thu, 03 Dec 2020 15:25:30 GMT Body: { "error": { "code": "ResourceAvailabilityZonesCannotBeModified", "message": "Resource 
/subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAdd resses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa.", "details": [] } } Set-AzPublicIpAddress : Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/provid ers/Microsoft.Network/publicIPAddresses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa. StatusCode: 400 ReasonPhrase: Bad Request ErrorCode: ResourceAvailabilityZonesCannotBeModified ErrorMessage: Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Micros oft.Network/publicIPAddresses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa. 
OperationID : 105eca02-9d4a-44cc-b0ac-f3c2f261edc8 At line:1 char:16 + $updateMyPip | Set-AzPublicIpAddress + ~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : CloseError: (:) [Set-AzPublicIpAddress], NetworkCloudException + FullyQualifiedErrorId : Microsoft.Azure.Commands.Network.SetAzurePublicIpAddressCommand DEBUG: AzureQoSEvent: CommandName - Set-AzPublicIpAddress; IsSuccess - False; Duration - 00:00:00.8117733;; Exception - Microsoft.Azure.Commands.Network.Common.NetworkCloudException: Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAdd resses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa. StatusCode: 400 ReasonPhrase: Bad Request ErrorCode: ResourceAvailabilityZonesCannotBeModified ErrorMessage: Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAdd resses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa. OperationID : 105eca02-9d4a-44cc-b0ac-f3c2f261edc8 ---> Microsoft.Rest.Azure.CloudException: Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAdd resses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa. 
at Microsoft.Azure.Management.Network.PublicIPAddressesOperations.<BeginCreateOrUpdateWithHttpMessagesAsync>d__15.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.Network.PublicIPAddressesOperations.<CreateOrUpdateWithHttpMessagesAsync>d__7.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.Network.PublicIPAddressesOperationsExtensions.<CreateOrUpdateAsync>d__5.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.Network.PublicIPAddressesOperationsExtensions.CreateOrUpdate(IPublicIPAddressesOperations operations, String resourceGroupName, String publicIpAddressName, PublicIPAddress parameters) at Microsoft.Azure.Commands.Network.SetAzurePublicIpAddressCommand.Execute() at Microsoft.Azure.Commands.Network.NetworkBaseCmdlet.ExecuteCmdlet() --- End of inner exception stack trace --- at Microsoft.Azure.Commands.Network.NetworkBaseCmdlet.ExecuteCmdlet() at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord(); DEBUG: Finish sending metric. DEBUG: 9:25:32 AM - SetAzurePublicIpAddressCommand end processing. 
``` ## Error output <!-- Please run Resolve-AzError and paste the output in the below code block ATTENTION: Be sure to remove any sensitive information that may be in the logs --> ``` HistoryId: 16 RequestId : Message : Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAddresses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa. StatusCode: 400 ReasonPhrase: Bad Request ErrorCode: ResourceAvailabilityZonesCannotBeModified ErrorMessage: Resource /subscriptions/61f3232b-7b0a-4c0c-b558-b241ec754ca4/resourceGroups/trr-mytestrg/providers/Microsoft.Network/publicIPAddresses/trr-mytestpip has an existing availability zone constraint 1, 2, 3 and the request has availability zone constraint NoZone, which do not match. Zones cannot be added/updated/removed once the resource is created. The resource cannot be updated from regional to zonal or vice-versa. OperationID : 105eca02-9d4a-44cc-b0ac-f3c2f261edc8 ServerMessage : ServerResponse : RequestMessage : InvocationInfo : {Set-AzPublicIpAddress} Line : $updateMyPip | Set-AzPublicIpAddress Position : At line:1 char:16 + $updateMyPip | Set-AzPublicIpAddress + ~~~~~~~~~~~~~~~~~~~~~ StackTrace : at Microsoft.Azure.Commands.Network.NetworkBaseCmdlet.ExecuteCmdlet() at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord() HistoryId : 16 ```
non_process
set azpublicipaddress does not work with ips created from public ip prefix make sure you are able to reproduce this issue on the latest released version of az please search the existing issues to see if there has been a similar issue filed for issue related to importing a module please refer to our troubleshooting guide description set azpublicipaddress produces zone related errors when public ip address is created fromh public ip prefix set azpublicipaddress resource subscriptions resourcegroups trr mytestrg provid ers microsoft network publicipaddresses trr mytestpip has an existing availability zone constraint and the request has availability zone constraint nozone which do not match zones cannot be added updated removed once the resource is created the resource cannot be updated from regional to zonal or vice versa statuscode reasonphrase bad request errorcode resourceavailabilityzonescannotbemodified errormessage resource subscriptions resourcegroups trr mytestrg providers micros oft network publicipaddresses trr mytestpip has an existing availability zone constraint and the request has availability zone constraint nozone which do not match zones cannot be added updated removed once the resource is created the resource cannot be updated from regional to zonal or vice versa operationid at line char updatemypip set azpublicipaddress categoryinfo closeerror networkcloudexception fullyqualifiederrorid microsoft azure commands network setazurepublicipaddresscommand steps to reproduce rgname trr mytestrg prefixname trr mytestprefix pipname trr mytestpip location eastus rg new azresourcegroup name rgname location location prefix new azpublicipprefix name prefixname location location resourcegroupname rgname prefixlength sku standard ipaddressversion pip new azpublicipaddress name pipname location location resourcegroupname rgname ipaddressversion sku standard allocationmethod static publicipprefix prefix idletimeoutinminutes updatemypip get azpublicipaddress name 
pipname resourcegroupname rgname updatemypip idletimeoutinminutes updatemypip set azpublicipaddress environment data please run psversiontable and paste the output in the below code block if running the docker container image indicate the tag of the image used and the version of docker engine name value psversion psedition desktop pscompatibleversions buildversion clrversion wsmanstackversion psremotingprotocolversion serializationversion module versions script az accounts disable azdatacollection disable azcontextautosave enab script az advisor get azadvisorrecommendation enable azadvisorrecommendati script az aks get azakscluster new azakscluster remove azakscluster script az alertsmanagement get azalert get azalertobjecthistory update azalertstat script az analysisservices resume azanalysisservicesserver suspend azanalysisservic script az apimanagement add azapimanagementapitogateway add azapimanagementapito script az appconfiguration get azappconfigurationstore get azappconfigurationstorek script az applicationinsights get azapplicationinsights new azapplicationinsights rem script az attestation new azattestation get azattestation remove azattestatio script az automation get azautomationhybridworkergroup remove azautomationhyb script az batch remove azbatchaccount get azbatchaccount get azbatchacc script az billing get azbillinginvoice get azbillingperiod get azenrollme script az blockchain get azblockchainconsortium get azblockchainmember get a script az blueprint get azblueprint get azblueprintassignment new azbluepri script az cdn get azcdnprofile get azcdnprofilessourl new azcdnprofil script az clonevirtualmachine new azvmclone script az cognitiveservices get azcognitiveservicesaccount get azcognitiveservicesac script az compute remove azavailabilityset get azavailabilityset new azav script az compute managedservice convertto azvhd script az connectedkubernetes get azconnectedkubernetes new azconnectedkubernetes rem script az connectedmachine connect 
azconnectedmachine get azconnectedmachine get a script az consumption get azconsumptionbudget get azconsumptionmarketplace ge script az containerinstance new azcontainergroup get azcontainergroup remove azcont script az containerregistry new azcontainerregistry get azcontainerregistry update script az cosmosdb get azcosmosdbsqlcontainer get azcosmosdbsqlcontainerthr script az customproviders get azcustomprovider get azcustomproviderassociation ne script az databox get azdataboxjob get azdataboxcredential stop azdatabox script az databoxedge get azdataboxedgejob get azdataboxedgedevice invoke azd script az databricks get azdatabricksvnetpeering get azdatabricksworkspace n script az datafactory set update get azdatafa script az datalakeanalytics get azdatalakeanalyticsdatasource new azdatalakeanalytic script az datalakestore get azdatalakestoretrustedidprovider remove azdatalakest script az datamigration new azdatamigrationdatabaseinfo new azdatamigrationconne script az datashare new azdatashareaccount get azdatashareaccount remove az script az dedicatedhsm get azdedicatedhsm new azdedicatedhsm remove azdedicate script az deploymentmanager get azdeploymentmanagerartifactsource new azdeploymentma script az desktopvirtualization disconnect azwvdusersession expand azwvdmsiximage get a script az deviceprovisioningservices new aziotdeviceprovisioningservice get aziotdeviceprovis script az devspaces get azdevspacescontroller new azdevspacescontroller rem script az devtestlabs get azdtlallowedvmsizespolicy get azdtlautoshutdownpolic script az digitaltwins get azdigitaltwinsendpoint get azdigitaltwinsinstance n script az dns get azdnsrecordset new azdnsrecordconfig remove azdnsre script az eventgrid new azeventgridtopic get azeventgridtopic set azeventgr script az eventhub new azeventhubnamespace get azeventhubnamespace set aze script az frontdoor new azfrontdoor get azfrontdoor set azfrontdoor remove script az functions get azfunctionapp get azfunctionappavailablelocation 
ge script az guestconfiguration get azvmguestpolicystatus get azvmguestpolicystatushistory script az hanaonazure get azsapmonitor get azsapmonitorproviderinstance new a script az hdinsight get azhdinsightjob new azhdinsightsqoopjobdefinition wa script az healthcareapis new azhealthcareapisservice remove azhealthcareapisservi script az hpccache get azhpccachesku get azhpccacheusagemodel get azhpccac script az imagebuilder get azimagebuilderrunoutput get azimagebuildertemplate script az imagebuilder tools get aibbuildstatus initialize azureimagebuilder invoke script az importexport get azimportexport get azimportexportbitlockerkey get a script az insights get azmetricdefinition get azmetric remove azlogprofile script az iotcentral new aziotcentralapp get aziotcentralapp set aziotcentra script az iothub add aziothubkey get aziothubeventhubconsumergroup get a script az keyvault add azmanagedhsmkey get azmanagedhsmkey remove azmanage script az kubernetesconfiguration get azkubernetesconfiguration new azkubernetesconfigurat script az kusto add azkustoclusterlanguageextension add azkustodatabasep script az logicapp get azintegrationaccountagreement get azintegrationaccou script az machinelearning move azmlcommitmentassociation get azmlcommitmentassocia script az machinelearningcompute get azmlopcluster get azmlopclusterkey test azmlopclust script az maintenance get azapplyupdate get azconfigurationassignment get azm script az managedserviceidentity new azuserassignedidentity get azuserassignedidentity r script az managedservices get azmanagedservicesassignment new azmanagedservicesass script az managementpartner get azmanagementpartner new azmanagementpartner update script az maps get azmapsaccount new azmapsaccount remove azmapsaccoun script az mariadb get azmariadbconfiguration get azmariadbconnectionstring script az marketplace get azmarketplaceprivatestore get azmarketplaceprivatest script az marketplaceordering get azmarketplaceterms set azmarketplaceterms script az 
media sync azmediaservicestoragekey set azmediaservicekey get script az migrate get azmigratediscoveredserver get azmigratejob get azmi script az mixedreality get azspatialanchorsaccount get azspatialanchorsaccountk script az monitor get azmetricdefinition get azmetric remove azlogprofile script az monitoringsolutions get azmonitorloganalyticssolution new azmonitorloganalyt script az mysql get azmysqlconfiguration get azmysqlconnectionstring ge script az netappfiles get aznetappfilesaccount new aznetappfilesaccount remov script az network add azapplicationgatewayauthenticationcertificate get az script az network add azapplicationgatewayauthenticationcertificate get az script az notificationhubs get aznotificationhub get aznotificationhubauthorization script az operationalinsights new azoperationalinsightsazureactivitylogdatasource new script az peering get azpeering get azpeerasn new azpeerasn new azpeerin script az policyinsights get azpolicyevent get azpolicystate get azpolicystatesu script az portal get azportaldashboard new azportaldashboard remove azpo script az postgresql get azpostgresqlconfiguration get azpostgresqlconnection script az powerbiembedded remove azpowerbiworkspacecollection get azpowerbiworkspa script az privatedns get azprivatednszone remove azprivatednszone set azpriv script az profile disable azdatacollection disable azcontextautosave enab script az recoveryservices get azrecoveryservicesbackupproperty get azrecoveryservi script az rediscache remove azrediscachepatchschedule new azrediscacheschedul script az relay new azrelaynamespace get azrelaynamespace set azrelayna script az reservations get azreservationorder get azreservation get azreservat script az resourcegraph search azgraph script az resourcemover add azresourcemovermoveresource get azresourcemovermovec script az resources get azprovideroperation remove azroleassignment get azr script az search new azsearchservice get azsearchservice set azsearchser script az security get 
azsecurityalert set azsecurityalert get azsecuritya script az servicebus new azservicebusnamespace get azservicebusnamespace set script az servicefabric add azservicefabricclientcertificate add azservicefabric script az signalr new azsignalr get azsignalr get azsignalrkey new azsig script az springcloud deploy azspringcloudapp get azspringcloud get azspringc script az sql get azsqldatabasetransparentdataencryption get azsqldata script az sqlvirtualmachine new azsqlvm get azsqlvm update azsqlvm remove azsqlvm script az stackedge get azstackedgejob get azstackedgedevice invoke azstack script az stackhci register azstackhci unregister azstackhci test azstackh script az storage get azstorageaccount get azstorageaccountkey new azstor script az storagesync invoke azstoragesynccompatibilitycheck new azstoragesync manifest az storagetable add azstoragetablerow get azstoragetablerowall get azst script az streamanalytics get azstreamanalyticsfunction get azstreamanalyticsdefau script az subscription update azsubscription new azsubscriptionalias get azsub script az support get azsupportservice get azsupportproblemclassification script az synapse get azsynapsesparkjob stop azsynapsesparkjob submit azs script az tags remove aztag get aztag new aztag script az timeseriesinsights get aztimeseriesinsightsaccesspolicy get aztimeseriesins script az tools installer install azmodule uninstall azmodule update azmodule script az tools migration disable azupgradedatacollection enable azupgradedatacoll script az trafficmanager add aztrafficmanagercustomheadertoendpoint remove aztraf script az usageaggregates get usageaggregates script az vmware get azvmwareauthorization get azvmwarecluster get azvmw script az websites get azappserviceplan set azappserviceplan new azappserv debug output set debugpreference continue before running the repro and paste the resulting debug stream in the below code block attention be sure to remove any sensitive information that may be in the logs updatemypip 
get azpublicipaddress name pipname resourcegroupname rgname debug am getazurepublicipaddresscommand begin processing with parameterset noexpandstandaloneip debug am using account id trindels redapt com debug authenticating using account trindels redapt com environment azurecloud tenant debug sharedtokencachecredential gettoken invoked scopes parentrequestid debug sharedtokencachecredential gettoken succeeded scopes parentrequestid expireson debug sharedtokencachecredential gettoken invoked scopes parentrequestid debug sharedtokencachecredential gettoken succeeded scopes parentrequestid expireson debug received token with logintype user tenant userid trindels redapt com debug http request http method get absolute uri icrosoft network publicipaddresses trr mytestpip api version headers x ms client request id accept language en us body debug http response status code ok headers pragma no cache x ms request id x ms correlation request id x ms arm service request id strict transport security max age includesubdomains cache control no cache etag w server microsoft httpapi microsoft httpapi x ms ratelimit remaining subscription reads x ms routing request id centralus x content type options nosniff date thu dec gmt body name trr mytestpip id subscriptions resourcegroups trr mytestrg providers microsoft network publicipad dresses trr mytestpip etag w location eastus properties provisioningstate succeeded resourceguid ipaddress publicipaddressversion publicipallocationmethod static idletimeoutinminutes iptags publicipprefix id subscriptions resourcegroups trr mytestrg providers microsoft network publicippr efixes trr mytestprefix type microsoft network publicipaddresses sku name standard tier regional debug azureqosevent commandname get azpublicipaddress issuccess true duration debug finish sending metric debug am getazurepublicipaddresscommand end processing updatemypip idletimeoutinminutes updatemypip set azpublicipaddress debug am setazurepublicipaddresscommand begin 
processing with parameterset allparametersets debug am using account id trindels redapt com debug authenticating using account trindels redapt com environment azurecloud tenant debug sharedtokencachecredential gettoken invoked scopes parentrequestid debug sharedtokencachecredential gettoken succeeded scopes parentrequestid expireson debug sharedtokencachecredential gettoken invoked scopes parentrequestid debug sharedtokencachecredential gettoken succeeded scopes parentrequestid expireson debug received token with logintype user tenant userid trindels redapt com debug http request http method get absolute uri icrosoft network publicipaddresses trr mytestpip api version headers x ms client request id accept language en us body debug http response status code ok headers pragma no cache x ms request id x ms correlation request id x ms arm service request id strict transport security max age includesubdomains cache control no cache etag w server microsoft httpapi microsoft httpapi x ms ratelimit remaining subscription reads x ms routing request id centralus x content type options nosniff date thu dec gmt body name trr mytestpip id subscriptions resourcegroups trr mytestrg providers microsoft network publicipad dresses trr mytestpip etag w location eastus properties provisioningstate succeeded resourceguid ipaddress publicipaddressversion publicipallocationmethod static idletimeoutinminutes iptags publicipprefix id subscriptions resourcegroups trr mytestrg providers microsoft network publicippr efixes trr mytestprefix type microsoft network publicipaddresses sku name standard tier regional debug http request http method put absolute uri icrosoft network publicipaddresses trr mytestpip api version headers x ms client request id accept language en us body sku name standard tier regional properties publicipallocationmethod static publicipaddressversion iptags ipaddress publicipprefix id subscriptions resourcegroups trr mytestrg providers microsoft network publicippr efixes 
trr mytestprefix idletimeoutinminutes zones id subscriptions resourcegroups trr mytestrg providers microsoft network publicipad dresses trr mytestpip location eastus debug http response status code badrequest headers pragma no cache x ms request id x ms correlation request id x ms arm service request id strict transport security max age includesubdomains cache control no cache server microsoft httpapi microsoft httpapi x ms ratelimit remaining subscription writes x ms routing request id centralus x content type options nosniff date thu dec gmt body error code resourceavailabilityzonescannotbemodified message resource subscriptions resourcegroups trr mytestrg providers microsoft network publicipadd resses trr mytestpip has an existing availability zone constraint and the request has availability zone constraint nozone which do not match zones cannot be added updated removed once the resource is created the resource cannot be updated from regional to zonal or vice versa details set azpublicipaddress resource subscriptions resourcegroups trr mytestrg provid ers microsoft network publicipaddresses trr mytestpip has an existing availability zone constraint and the request has availability zone constraint nozone which do not match zones cannot be added updated removed once the resource is created the resource cannot be updated from regional to zonal or vice versa statuscode reasonphrase bad request errorcode resourceavailabilityzonescannotbemodified errormessage resource subscriptions resourcegroups trr mytestrg providers micros oft network publicipaddresses trr mytestpip has an existing availability zone constraint and the request has availability zone constraint nozone which do not match zones cannot be added updated removed once the resource is created the resource cannot be updated from regional to zonal or vice versa operationid at line char updatemypip set azpublicipaddress categoryinfo closeerror networkcloudexception fullyqualifiederrorid microsoft azure commands 
network setazurepublicipaddresscommand debug azureqosevent commandname set azpublicipaddress issuccess false duration exception microsoft azure commands network common networkcloudexception resource subscriptions resourcegroups trr mytestrg providers microsoft network publicipadd resses trr mytestpip has an existing availability zone constraint and the request has availability zone constraint nozone which do not match zones cannot be added updated removed once the resource is created the resource cannot be updated from regional to zonal or vice versa statuscode reasonphrase bad request errorcode resourceavailabilityzonescannotbemodified errormessage resource subscriptions resourcegroups trr mytestrg providers microsoft network publicipadd resses trr mytestpip has an existing availability zone constraint and the request has availability zone constraint nozone which do not match zones cannot be added updated removed once the resource is created the resource cannot be updated from regional to zonal or vice versa operationid microsoft rest azure cloudexception resource subscriptions resourcegroups trr mytestrg providers microsoft network publicipadd resses trr mytestpip has an existing availability zone constraint and the request has availability zone constraint nozone which do not match zones cannot be added updated removed once the resource is created the resource cannot be updated from regional to zonal or vice versa at microsoft azure management network publicipaddressesoperations d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft azure management network publicipaddressesoperations d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices 
taskawaiter handlenonsuccessanddebuggernotification task task at microsoft azure management network publicipaddressesoperationsextensions d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft azure management network publicipaddressesoperationsextensions createorupdate ipublicipaddressesoperations operations string resourcegroupname string publicipaddressname publicipaddress parameters at microsoft azure commands network setazurepublicipaddresscommand execute at microsoft azure commands network networkbasecmdlet executecmdlet end of inner exception stack trace at microsoft azure commands network networkbasecmdlet executecmdlet at microsoft windowsazure commands utilities common azurepscmdlet processrecord debug finish sending metric debug am setazurepublicipaddresscommand end processing error output please run resolve azerror and paste the output in the below code block attention be sure to remove any sensitive information that may be in the logs historyid requestid message resource subscriptions resourcegroups trr mytestrg providers mic rosoft network publicipaddresses trr mytestpip has an existing availability zone constraint and the request has availability zone constraint nozone which do not match zones cannot be added updated removed once the resource is created the resource cannot be updated from regional to zonal or vice versa statuscode reasonphrase bad request errorcode resourceavailabilityzonescannotbemodified errormessage resource subscriptions resourcegroups trr mytestrg providers microsoft network publicipaddresses trr mytestpip has an existing availability zone constraint and the request has availability zone constraint nozone which do not match zones cannot be added updated removed once the resource is created the resource cannot be updated from regional 
to zonal or vice versa operationid servermessage serverresponse requestmessage invocationinfo set azpublicipaddress line updatemypip set azpublicipaddress position at line char updatemypip set azpublicipaddress stacktrace at microsoft azure commands network networkbasecmdlet executecmdlet at microsoft windowsazure commands utilities common azurepscmdlet processrecord historyid
0
316,776
9,654,801,900
IssuesEvent
2019-05-19 16:49:45
JustArchiNET/ASF-ui
https://api.github.com/repos/JustArchiNET/ASF-ui
closed
Set focus to textfield for BGR
Enhancement Priority: Low
When you open up the BGR view most likely you want to copy keys into the textfield. To make this action smoother for the user we should set the focus to the textfield upon opening the view. Similar to what we do in the commands view.
1.0
Set focus to textfield for BGR - When you open up the BGR view most likely you want to copy keys into the textfield. To make this action smoother for the user we should set the focus to the textfield upon opening the view. Similar to what we do in the commands view.
non_process
set focus to textfield for bgr when you open up the bgr view most likely you want to copy keys into the textfield to make this action smoother for the user we should set the focus to the textfield upon opening the view similar to what we do in the commands view
0
61,874
25,763,639,642
IssuesEvent
2022-12-08 23:03:04
microsoft/BotFramework-WebChat
https://api.github.com/repos/microsoft/BotFramework-WebChat
opened
A11y_Chat window_HighContrast:Cross button is not visible in desert HC mode.
bug area-accessibility customer-reported Bot Services external-pva
### Is it an issue related to Adaptive Cards? No. ### What is the PWD impact? Users viewing the page in HC mode will face difficulties. ### What browsers and screen readers do this issue affect? Windows: Edge with Windows Narrator, Windows: Chrome with NVDA, Windows: Chrome/Firefox with JAWS ### Are there any code-based customization done to Web Chat? No, I am using Web Chat without any customizations except "styleOptions". ### What version of Web Chat are you using? Latest production ### Which area does this issue affect? Contrast ratio, Forced colors (high contrast mode) ### What is the public URL for the website? https://web.powerva.microsoft.com/environments/839eace6-59ab-4243-97ec-a5b8fcc104e4/bots/107725f1-d827-41a4-8375-c91d1a97affc ### How to reproduce the issue? Pre-Requisite: Set Contrast theme to desert theme. 1. Launch the application using URL: [Home - Test | Power Virtual Agents (microsoft.com)](https://web.powerva.microsoft.com/environments/839eace6-59ab-4243-97ec-a5b8fcc104e4/bots/107725f1-d827-41a4-8375-c91d1a97affc) 1. Tab till chat box and press enter to open it. 1. Press tab key to navigate to cross button. ### What do you expect? Cross button should be clearly visible in HC mode. ### What actually happened? Cross button is not visible in desert HC mode. ### Do you have any screenshots or recordings to repro the issue? ![image](https://user-images.githubusercontent.com/1622400/206585414-0cfdcdb3-b21a-4e81-841a-d6cb599800da.png) ### Did you find any DOM elements that might have caused the issue? _No response_ ### MAS reference https://aka.ms/MAS4.3.1 ### WCAG reference _No response_ ### WAI-ARIA reference _No response_ ### Adaptive Card JSON _No response_ ### Additional context The color use in SVG should be a CSS system color of "text color"
1.0
A11y_Chat window_HighContrast:Cross button is not visible in desert HC mode. - ### Is it an issue related to Adaptive Cards? No. ### What is the PWD impact? Users viewing the page in HC mode will face difficulties. ### What browsers and screen readers do this issue affect? Windows: Edge with Windows Narrator, Windows: Chrome with NVDA, Windows: Chrome/Firefox with JAWS ### Are there any code-based customization done to Web Chat? No, I am using Web Chat without any customizations except "styleOptions". ### What version of Web Chat are you using? Latest production ### Which area does this issue affect? Contrast ratio, Forced colors (high contrast mode) ### What is the public URL for the website? https://web.powerva.microsoft.com/environments/839eace6-59ab-4243-97ec-a5b8fcc104e4/bots/107725f1-d827-41a4-8375-c91d1a97affc ### How to reproduce the issue? Pre-Requisite: Set Contrast theme to desert theme. 1. Launch the application using URL: [Home - Test | Power Virtual Agents (microsoft.com)](https://web.powerva.microsoft.com/environments/839eace6-59ab-4243-97ec-a5b8fcc104e4/bots/107725f1-d827-41a4-8375-c91d1a97affc) 1. Tab till chat box and press enter to open it. 1. Press tab key to navigate to cross button. ### What do you expect? Cross button should be clearly visible in HC mode. ### What actually happened? Cross button is not visible in desert HC mode. ### Do you have any screenshots or recordings to repro the issue? ![image](https://user-images.githubusercontent.com/1622400/206585414-0cfdcdb3-b21a-4e81-841a-d6cb599800da.png) ### Did you find any DOM elements that might have caused the issue? _No response_ ### MAS reference https://aka.ms/MAS4.3.1 ### WCAG reference _No response_ ### WAI-ARIA reference _No response_ ### Adaptive Card JSON _No response_ ### Additional context The color use in SVG should be a CSS system color of "text color"
non_process
chat window highcontrast cross button is not visible in desert hc mode is it an issue related to adaptive cards no what is the pwd impact users viewing the page in hc mode will face difficulties what browsers and screen readers do this issue affect windows edge with windows narrator windows chrome with nvda windows chrome firefox with jaws are there any code based customization done to web chat no i am using web chat without any customizations except styleoptions what version of web chat are you using latest production which area does this issue affect contrast ratio forced colors high contrast mode what is the public url for the website how to reproduce the issue pre requisite set contrast theme to desert theme launch the application using url tab till chat box and press enter to open it press tab key to navigate to cross button what do you expect cross button should be clearly visible in hc mode what actually happened cross button is not visible in desert hc mode do you have any screenshots or recordings to repro the issue did you find any dom elements that might have caused the issue no response mas reference wcag reference no response wai aria reference no response adaptive card json no response additional context the color use in svg should be a css system color of text color
0
21,571
29,924,710,483
IssuesEvent
2023-06-22 03:49:17
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
asyncssh 2.13.2 has 1 GuardDog issues
guarddog silent-process-execution
https://pypi.org/project/asyncssh https://inspector.pypi.io/project/asyncssh ```{ "dependency": "asyncssh", "version": "2.13.2", "result": { "issues": 1, "errors": {}, "results": { "silent-process-execution": [ { "location": "asyncssh-2.13.2/asyncssh/config.py:47", "code": " return subprocess.run(cmd, check=False, shell=True, stdin=DEVNULL,\n stdout=DEVNULL, stderr=DEVNULL).returncode == 0", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpplsnv46r/asyncssh" } }```
1.0
asyncssh 2.13.2 has 1 GuardDog issues - https://pypi.org/project/asyncssh https://inspector.pypi.io/project/asyncssh ```{ "dependency": "asyncssh", "version": "2.13.2", "result": { "issues": 1, "errors": {}, "results": { "silent-process-execution": [ { "location": "asyncssh-2.13.2/asyncssh/config.py:47", "code": " return subprocess.run(cmd, check=False, shell=True, stdin=DEVNULL,\n stdout=DEVNULL, stderr=DEVNULL).returncode == 0", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpplsnv46r/asyncssh" } }```
process
asyncssh has guarddog issues dependency asyncssh version result issues errors results silent process execution location asyncssh asyncssh config py code return subprocess run cmd check false shell true stdin devnull n stdout devnull stderr devnull returncode message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp asyncssh
1
431,557
12,483,207,109
IssuesEvent
2020-05-30 08:00:03
oslopride/oslopride.no
https://api.github.com/repos/oslopride/oslopride.no
closed
Headliners block
priority twist
A list of headliners. It should allow stuff to overflow below it (as shown with articles here), and lign up perfectly with the hero: <img width="336" alt="Screenshot 2020-05-10 at 16 31 10" src="https://user-images.githubusercontent.com/6630430/81502030-c216f900-92db-11ea-8e7e-a6ed201e6235.png">
1.0
Headliners block - A list of headliners. It should allow stuff to overflow below it (as shown with articles here), and lign up perfectly with the hero: <img width="336" alt="Screenshot 2020-05-10 at 16 31 10" src="https://user-images.githubusercontent.com/6630430/81502030-c216f900-92db-11ea-8e7e-a6ed201e6235.png">
non_process
headliners block a list of headliners it should allow stuff to overflow below it as shown with articles here and lign up perfectly with the hero img width alt screenshot at src
0
105,884
23,131,333,515
IssuesEvent
2022-07-28 10:38:30
SteeltoeOSS/Steeltoe
https://api.github.com/repos/SteeltoeOSS/Steeltoe
opened
Address S138: Functions should not have too many lines of code
Type/code-quality
Address existing violations of [S138: Functions should not have too many lines of code](https://rules.sonarsource.com/csharp/RSPEC-138) in the codebase and set severity to `Warning` in `Steeltoe.Debug.ruleset` and `Steeltoe.Release.ruleset`. To find existing violations, enable the rule (see above) and rebuild `src/Steeltoe.All.sln` to make them appear in the Output window. To address the violations, choose from the following on a case-by-case basis: - Fix the violation by changing the code to not violate the rule (usually extract method) - Suppress the violation in code using `#pragma warning disable/restore`, preceded by a justification comment if not obvious To be decided: the guideline that tests should contain everything relevant to understand the test supersedes this rule, so we may want to turn off this rule for test projects (using `NoWarn` in `sharedtest.props`).
1.0
Address S138: Functions should not have too many lines of code - Address existing violations of [S138: Functions should not have too many lines of code](https://rules.sonarsource.com/csharp/RSPEC-138) in the codebase and set severity to `Warning` in `Steeltoe.Debug.ruleset` and `Steeltoe.Release.ruleset`. To find existing violations, enable the rule (see above) and rebuild `src/Steeltoe.All.sln` to make them appear in the Output window. To address the violations, choose from the following on a case-by-case basis: - Fix the violation by changing the code to not violate the rule (usually extract method) - Suppress the violation in code using `#pragma warning disable/restore`, preceded by a justification comment if not obvious To be decided: the guideline that tests should contain everything relevant to understand the test supersedes this rule, so we may want to turn off this rule for test projects (using `NoWarn` in `sharedtest.props`).
non_process
address functions should not have too many lines of code address existing violations of in the codebase and set severity to warning in steeltoe debug ruleset and steeltoe release ruleset to find existing violations enable the rule see above and rebuild src steeltoe all sln to make them appear in the output window to address the violations choose from the following on a case by case basis fix the violation by changing the code to not violate the rule usually extract method suppress the violation in code using pragma warning disable restore preceded by a justification comment if not obvious to be decided the guideline that tests should contain everything relevant to understand the test supersedes this rule so we may want to turn off this rule for test projects using nowarn in sharedtest props
0
452,354
32,057,743,971
IssuesEvent
2023-09-24 09:39:54
privacy-scaling-explorations/bandada
https://api.github.com/repos/privacy-scaling-explorations/bandada
closed
Add the `x-api-key` parameter as required in the API Docs.
documentation :book: refactoring :recycle:
Add the `x-api-key` parameter as required in `addMembers` in the API Docs.
1.0
Add the `x-api-key` parameter as required in the API Docs. - Add the `x-api-key` parameter as required in `addMembers` in the API Docs.
non_process
add the x api key parameter as required in the api docs add the x api key parameter as required in addmembers in the api docs
0
10,048
13,044,161,653
IssuesEvent
2020-07-29 03:47:25
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `SubDateDatetimeReal` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `SubDateDatetimeReal` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @iosmanthus ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `SubDateDatetimeReal` from TiDB - ## Description Port the scalar function `SubDateDatetimeReal` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @iosmanthus ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function subdatedatetimereal from tidb description port the scalar function subdatedatetimereal from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
1
8,216
11,405,981,739
IssuesEvent
2020-01-31 13:23:31
prisma/specs
https://api.github.com/repos/prisma/specs
opened
Spec "consistency check" command
area/cli process/candidate spec/new
We need a command, probably directly exposed via the `prisma2` CLI, to check, if a `schema.prisma` and a database is in sync or if there are any discrepancies between tables / columns. This would help a lot to make sure that Prisma Client can successfully work with a database instead of getting runtime errors.
1.0
Spec "consistency check" command - We need a command, probably directly exposed via the `prisma2` CLI, to check, if a `schema.prisma` and a database is in sync or if there are any discrepancies between tables / columns. This would help a lot to make sure that Prisma Client can successfully work with a database instead of getting runtime errors.
process
spec consistency check command we need a command probably directly exposed via the cli to check if a schema prisma and a database is in sync or if there are any discrepancies between tables columns this would help a lot to make sure that prisma client can successfully work with a database instead of getting runtime errors
1
39,926
2,860,874,761
IssuesEvent
2015-06-03 17:53:52
Krasnyanskiy/jrsh
https://api.github.com/repos/Krasnyanskiy/jrsh
closed
As a User I want to be able to import zip and directory exports
in progress low priority
## Acceptance criteria For zip export we should use next syntax ```bash $> import zip /Users/alex/data.zip ``` For directory export we should use syntax like this ```bash $> import directory /Users/alex/folder ```
1.0
As a User I want to be able to import zip and directory exports - ## Acceptance criteria For zip export we should use next syntax ```bash $> import zip /Users/alex/data.zip ``` For directory export we should use syntax like this ```bash $> import directory /Users/alex/folder ```
non_process
as a user i want to be able to import zip and directory exports acceptance criteria for zip export we should use next syntax bash import zip users alex data zip for directory export we should use syntax like this bash import directory users alex folder
0
809,413
30,191,771,864
IssuesEvent
2023-07-04 15:59:54
bcgov/foi-flow
https://api.github.com/repos/bcgov/foi-flow
closed
When we replace a child attachment of a msg file (since msg file is a duplicate record on other division) replace happens on other divisions as well during pdf stich
bug high priority
**Describe the bug in current situation** A clear and concise description of what the bug is. **Link bug to the User Story** **Impact of this bug** Describe the impact, i.e. what the impact is, and number of users impacted. **Chance of Occurring (high/medium/low/very low)** **Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?** Dev-marshal/Test **Steps to Reproduce** Steps to reproduce the behavior: 1. Go to Ministry login 2. Try to upload same message files with attachments on two divisions 3. Try to replace a child attachment only on one division, it will reflect on other division's child attachments as well on pdf stich **Actual/ observed behaviour/ results** **Expected behaviour** A clear and concise description of what you expected to happen. Use the gherking language. **Screenshots/ Visual Reference/ Source** If applicable, add screenshots to help explain your problem. You an use screengrab. ![image.png](https://images.zenhubusercontent.com/63e6c81be9adb8e6f876c7e0/1050889a-e035-47e5-bd10-93efd42f7864)
1.0
When we replace a child attachment of a msg file (since msg file is a duplicate record on other division) replace happens on other divisions as well during pdf stich - **Describe the bug in current situation** A clear and concise description of what the bug is. **Link bug to the User Story** **Impact of this bug** Describe the impact, i.e. what the impact is, and number of users impacted. **Chance of Occurring (high/medium/low/very low)** **Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?** Dev-marshal/Test **Steps to Reproduce** Steps to reproduce the behavior: 1. Go to Ministry login 2. Try to upload same message files with attachments on two divisions 3. Try to replace a child attachment only on one division, it will reflect on other division's child attachments as well on pdf stich **Actual/ observed behaviour/ results** **Expected behaviour** A clear and concise description of what you expected to happen. Use the gherking language. **Screenshots/ Visual Reference/ Source** If applicable, add screenshots to help explain your problem. You an use screengrab. ![image.png](https://images.zenhubusercontent.com/63e6c81be9adb8e6f876c7e0/1050889a-e035-47e5-bd10-93efd42f7864)
non_process
when we replace a child attachment of a msg file since msg file is a duplicate record on other division replace happens on other divisions as well during pdf stich describe the bug in current situation a clear and concise description of what the bug is link bug to the user story impact of this bug describe the impact i e what the impact is and number of users impacted chance of occurring high medium low very low pre conditions which env any pre requesites or assumptions to execute steps dev marshal test steps to reproduce steps to reproduce the behavior go to ministry login try to upload same message files with attachments on two divisions try to replace a child attachment only on one division it will reflect on other division s child attachments as well on pdf stich actual observed behaviour results expected behaviour a clear and concise description of what you expected to happen use the gherking language screenshots visual reference source if applicable add screenshots to help explain your problem you an use screengrab
0
9,478
12,476,753,834
IssuesEvent
2020-05-29 14:00:13
kubeflow/kubeflow
https://api.github.com/repos/kubeflow/kubeflow
closed
Cut Kubeflow 1.0 RC
area/engprod kind/feature kind/process lifecycle/stale priority/p0
/kind process Tracking bug for the Kubeflow 1.0 initial RC. @richardsliu Has cut a 1.0 branch for kubeflow/manifests @jlewi Has cut a 1.0 branch for kubeflow/kubeflow We should begin setting up tests etc... for those branches. If need be we can rebase those branches on master if there is significant changes that need to be pulled in. Links: * Demo script - http://bit.ly/demo-v1-0 * [Kanban Board For 1.0](https://github.com/orgs/kubeflow/projects/25?card_filter_query=label%3Apriority%2Fp0)
1.0
Cut Kubeflow 1.0 RC - /kind process Tracking bug for the Kubeflow 1.0 initial RC. @richardsliu Has cut a 1.0 branch for kubeflow/manifests @jlewi Has cut a 1.0 branch for kubeflow/kubeflow We should begin setting up tests etc... for those branches. If need be we can rebase those branches on master if there is significant changes that need to be pulled in. Links: * Demo script - http://bit.ly/demo-v1-0 * [Kanban Board For 1.0](https://github.com/orgs/kubeflow/projects/25?card_filter_query=label%3Apriority%2Fp0)
process
cut kubeflow rc kind process tracking bug for the kubeflow initial rc richardsliu has cut a branch for kubeflow manifests jlewi has cut a branch for kubeflow kubeflow we should begin setting up tests etc for those branches if need be we can rebase those branches on master if there is significant changes that need to be pulled in links demo script
1
11,856
14,664,530,454
IssuesEvent
2020-12-29 12:13:31
parcel-bundler/parcel
https://api.github.com/repos/parcel-bundler/parcel
closed
How do I exclude a css file from being included as a css module?
:grey_question: Question CSS Preprocessing Stale
# โ” Question Can I exclude certain css files from being bundled as css modules? ## ๐Ÿ”ฆ Context I'm building a SPA and have split the project into several npm modules. I'm using React, Typescript and PostCSS to CSS Modules for my components. This works fine. However, sometimes I want to use third party react components, that I need to style with custom CSS. In particular I'm currently trying to style the datepicker "react-datetime". To include the default css that comes with the component I simply add `import "react-datetime/css/react-datetime.css"` and the css is bundled correctly. But, I do not want to use the default styling, so I've added my own css file. To import this one I use: `import "./custom.css";` But this time the css file is imported as a module and all css names are converted and no longer match the css names found in the html produced by the datetime component. So the question is: Can I exclude certain files from this "css module" processing? To enable Css Modules in parcel I added a postcss.config.js to the root of my project. It has the following content: ``` module.exports = { "modules": true }; ``` ## ๐Ÿ’ป Code Sample <!-- If you are seeing an error, please provide a code repository, gist or sample files to reproduce the issue --> ## ๐ŸŒ Your Environment <!--- Include as many relevant details about the environment you are using --> | Software | Version(s) | | ---------------- | ---------- | | Parcel |1.9.7 | Node |8.11.3 | npm/Yarn | 5.6.0 | Operating System | Win10 <!-- Love parcel? Please consider supporting our collective: ๐Ÿ‘‰ https://opencollective.com/parcel/donate -->
1.0
How do I exclude a css file from being included as a css module? - # ❔ Question Can I exclude certain css files from being bundled as css modules? ## 🔦 Context I'm building a SPA and have split the project into several npm modules. I'm using React, Typescript and PostCSS to CSS Modules for my components. This works fine. However, sometimes I want to use third party react components, that I need to style with custom CSS. In particular I'm currently trying to style the datepicker "react-datetime". To include the default css that comes with the component I simply add `import "react-datetime/css/react-datetime.css"` and the css is bundled correctly. But, I do not want to use the default styling, so I've added my own css file. To import this one I use: `import "./custom.css";` But this time the css file is imported as a module and all css names are converted and no longer match the css names found in the html produced by the datetime component. So the question is: Can I exclude certain files from this "css module" processing? To enable Css Modules in parcel I added a postcss.config.js to the root of my project. It has the following content: ``` module.exports = { "modules": true }; ``` ## 💻 Code Sample <!-- If you are seeing an error, please provide a code repository, gist or sample files to reproduce the issue --> ## 🌍 Your Environment <!--- Include as many relevant details about the environment you are using --> | Software | Version(s) | | ---------------- | ---------- | | Parcel |1.9.7 | Node |8.11.3 | npm/Yarn | 5.6.0 | Operating System | Win10 <!-- Love parcel? Please consider supporting our collective: 👉 https://opencollective.com/parcel/donate -->
process
how do i exclude a css file from being included as a css module ❔ question can i exclude certain css files from being bundled as css modules 🔦 context i m building a spa and have split the project into several npm modules i m using react typescript and postcss to css modules for my components this works fine however sometimes i want to use third party react components that i need to style with custom css in particular i m currently trying to style the datepicker react datetime to include the default css that comes with the component i simply add import react datetime css react datetime css and the css is bundled correctly but i do not want to use the default styling so i ve added my own css file to import this one i use import custom css but this time the css file is imported as a module and all css names are converted and no longer match the css names found in the html produced by the datetime component so the question is can i exclude certain files from this css module processing to enable css modules in parcel i added a postcss config js to the root of my project it has the following content module exports modules true 💻 code sample 🌍 your environment software version s parcel node npm yarn operating system love parcel please consider supporting our collective 👉
1
227,097
7,526,932,730
IssuesEvent
2018-04-13 15:25:23
nco/nco
https://api.github.com/repos/nco/nco
reopened
Benchmark chunking
medium priority
Please design and implement an input data set exactly the same as described in https://www.unidata.ucar.edu/blogs/developer/en/entry/chunking_data_choosing_shapes and construct a script that runs ncks, ncccopy, and h5repack to (re-)chunk and the dataset exactly as in the tables in that page. 1. Verify that the NCO rew algorithm produces the same chunksizes as in Table 1. 2. Then re-run tests in Table 2, where now there's a third column for ncks with rew. nccopy and ncks should have similar timings. 3. Then re-do the timings in Table 3 with ncks and rew
1.0
Benchmark chunking - Please design and implement an input data set exactly the same as described in https://www.unidata.ucar.edu/blogs/developer/en/entry/chunking_data_choosing_shapes and construct a script that runs ncks, ncccopy, and h5repack to (re-)chunk and the dataset exactly as in the tables in that page. 1. Verify that the NCO rew algorithm produces the same chunksizes as in Table 1. 2. Then re-run tests in Table 2, where now there's a third column for ncks with rew. nccopy and ncks should have similar timings. 3. Then re-do the timings in Table 3 with ncks and rew
non_process
benchmark chunking please design and implement an input data set exactly the same as described in and construct a script that runs ncks ncccopy and to re chunk and the dataset exactly as in the tables in that page verify that the nco rew algorithm produces the same chunksizes as in table then re run tests in table where now there s a third column for ncks with rew nccopy and ncks should have similar timings then re do the timings in table with ncks and rew
0
384,785
11,403,313,393
IssuesEvent
2020-01-31 06:47:35
woocommerce/woocommerce-gateway-paypal-express-checkout
https://api.github.com/repos/woocommerce/woocommerce-gateway-paypal-express-checkout
closed
woocommerce_after_checkout_validation is running twice
Priority: Low [Type] Bug
I have a ReCaptcha v2 [plugin](https://codecanyon.net/item/woocommerce-checkout-recaptcha/17346203) that hooks into `woocommerce_after_checkout_validation` When starting the Paypal pop-up it appears the `woocommerce_after_checkout_validation` action runs and when you complete your payment details in Paypal and the pop-up closes the `woocommerce_after_checkout_validation` action runs a second time. This is causing issues for the ReCaptcha plugin, because the ReCaptcha token is only good for one verification check and will fail any subsequent verification. (Attempt #1: success -> Attempt #2: failure -> Result: failure) So customer orders are unable to process. The ReCaptcha is required because of "carding" issues and Paypal has asked for a Captcha to be installed on the website. My assumption is that the action should only run when the Paypal pop-up is closing. Running on the latest versions of WooCommerce and this plugin.
1.0
woocommerce_after_checkout_validation is running twice - I have a ReCaptcha v2 [plugin](https://codecanyon.net/item/woocommerce-checkout-recaptcha/17346203) that hooks into `woocommerce_after_checkout_validation` When starting the Paypal pop-up it appears the `woocommerce_after_checkout_validation` action runs and when you complete your payment details in Paypal and the pop-up closes the `woocommerce_after_checkout_validation` action runs a second time. This is causing issues for the ReCaptcha plugin, because the ReCaptcha token is only good for one verification check and will fail any subsequent verification. (Attempt #1: success -> Attempt #2: failure -> Result: failure) So customer orders are unable to process. The ReCaptcha is required because of "carding" issues and Paypal has asked for a Captcha to be installed on the website. My assumption is that the action should only run when the Paypal pop-up is closing. Running on the latest versions of WooCommerce and this plugin.
non_process
woocommerce after checkout validation is running twice i have a recaptcha that hooks into woocommerce after checkout validation when starting the paypal pop up it appears the woocommerce after checkout validation action runs and when you complete your payment details in paypal and the pop up closes the woocommerce after checkout validation action runs a second time this is causing issues for the recaptcha plugin because the recaptcha token is only good for one verification check and will fail any subsequent verification attempt success attempt failure result failure so customer orders are unable to process the recaptcha is required because of carding issues and paypal has asked for a captcha to be installed on the website my assumption is that the action should only run when the paypal pop up is closing running on the latest versions of woocommerce and this plugin
0
242,158
7,838,837,224
IssuesEvent
2018-06-18 11:45:41
soedinglab/BaMM_webserver
https://api.github.com/repos/soedinglab/BaMM_webserver
closed
[9dd888d1] error in plotPvalStats.R
bug priority
Urgh :-( ``` Error in evaluateMotif(pvalues, filename = filename, rerank = FALSE, data_eta0 = data_eta0) : Error: input p-values must all be in the range 0 to 1! Execution halted ```
1.0
[9dd888d1] error in plotPvalStats.R - Urgh :-( ``` Error in evaluateMotif(pvalues, filename = filename, rerank = FALSE, data_eta0 = data_eta0) : Error: input p-values must all be in the range 0 to 1! Execution halted ```
non_process
error in plotpvalstats r urgh error in evaluatemotif pvalues filename filename rerank false data data error input p values must all be in the range to execution halted
0
7,830
11,008,543,845
IssuesEvent
2019-12-04 10:44:06
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
Merge 'suppression by symbiont of x' and 'negative regulation of x'
multi-species process term merge
Hello, Similar to what we did in #18043 for induction and positive regulation, suppression and negative regulation terms should be merged. For example: - 'negative regulation by symbiont of host defense response' -> merge into: 'suppression of host defenses'
1.0
Merge 'suppression by symbiont of x' and 'negative regulation of x' - Hello, Similar to what we did in #18043 for induction and positive regulation, suppression and negative regulation terms should be merged. For example: - 'negative regulation by symbiont of host defense response' -> merge into: 'suppression of host defenses'
process
merge suppression by symbiont of x and negative regulation of x hello similar to what we did in for induction and positive regulation suppression and negative regulation terms should be merged for example negative regulation by symbiont of host defense response merge into suppression of host defenses
1
15,864
20,035,863,822
IssuesEvent
2022-02-02 11:49:15
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[iOS] 'Skip' and 'Next' buttons are not displayed for text choice questions configured in question step
Bug P1 iOS Process: Fixed Process: Tested QA Process: Tested dev
**Steps:** 1. Edit a study 2. Navigate to questionnaires > Add question step 3. Add a text choice question with Skippable 'yes' 4. Publish updates 5. Open the activity in iOS mobile app and observe **Actual:** 'Done' button is displayed **Expected:** 'Skip' and 'Next' should be displayed and user should be able to skip the question Notes: 1. Issue not observed for other response types 2. Issue not observed for form step - response types including text choice 3. Issue not observed in Android iOS: ![IMG_2292](https://user-images.githubusercontent.com/60386291/138225268-c55bb073-4e46-4eff-9657-5520de7be73b.PNG) SB: ![SB](https://user-images.githubusercontent.com/60386291/138225324-528f8536-53b4-47dc-a1f3-7e354bf1b09c.png)
3.0
[iOS] 'Skip' and 'Next' buttons are not displayed for text choice questions configured in question step - **Steps:** 1. Edit a study 2. Navigate to questionnaires > Add question step 3. Add a text choice question with Skippable 'yes' 4. Publish updates 5. Open the activity in iOS mobile app and observe **Actual:** 'Done' button is displayed **Expected:** 'Skip' and 'Next' should be displayed and user should be able to skip the question Notes: 1. Issue not observed for other response types 2. Issue not observed for form step - response types including text choice 3. Issue not observed in Android iOS: ![IMG_2292](https://user-images.githubusercontent.com/60386291/138225268-c55bb073-4e46-4eff-9657-5520de7be73b.PNG) SB: ![SB](https://user-images.githubusercontent.com/60386291/138225324-528f8536-53b4-47dc-a1f3-7e354bf1b09c.png)
process
skip and next buttons are not displayed for text choice questions configured in question step steps edit a study navigate to questionnaires add question step add a text choice question with skippable yes publish updates open the activity in ios mobile app and observe actual done button is displayed expected skip and next should be displayed and user should be able to skip the question notes issue not observed for other response types issue not observed for form step response types including text choice issue not observed in android ios sb
1
2,295
2,525,019,045
IssuesEvent
2015-01-20 21:38:52
graybeal/ont
https://api.github.com/repos/graybeal/ont
closed
admin interface to remove an ontology
1 star enhancement imported Milestone-Beta1 ont Priority-High
_From [caru...@gmail.com](https://code.google.com/u/113886747689301365533/) on March 30, 2010 16:04:16_ What capability do you want added or improved? Ability to remove an ontology (especifically, a given version). Where do you want this capability to be accessible? In the portal. What sort of input/command mechanism do you want? Just for users with admin privileges, it could be something like the following: - a special section to remove entries - a button "delete" next to each entry in the main browse table What is the desired output (content, format, location)? confirmation that the ontology was removed from the registry. Other details of your desired capability? the underlying triple store may be updated immediately or scheduled to be updated later. Add the respective explanation in the confirmation message. What version of the product are you using? MMI Portal 1.8.1.alpha (20091229171344) _Original issue: http://code.google.com/p/mmisw/issues/detail?id=242_
1.0
admin interface to remove an ontology - _From [caru...@gmail.com](https://code.google.com/u/113886747689301365533/) on March 30, 2010 16:04:16_ What capability do you want added or improved? Ability to remove an ontology (especifically, a given version). Where do you want this capability to be accessible? In the portal. What sort of input/command mechanism do you want? Just for users with admin privileges, it could be something like the following: - a special section to remove entries - a button "delete" next to each entry in the main browse table What is the desired output (content, format, location)? confirmation that the ontology was removed from the registry. Other details of your desired capability? the underlying triple store may be updated immediately or scheduled to be updated later. Add the respective explanation in the confirmation message. What version of the product are you using? MMI Portal 1.8.1.alpha (20091229171344) _Original issue: http://code.google.com/p/mmisw/issues/detail?id=242_
non_process
admin interface to remove an ontology from on march what capability do you want added or improved ability to remove an ontology especifically a given version where do you want this capability to be accessible in the portal what sort of input command mechanism do you want just for users with admin privileges it could be something like the following a special section to remove entries a button delete next to each entry in the main browse table what is the desired output content format location confirmation that the ontology was removed from the registry other details of your desired capability the underlying triple store may be updated immediately or scheduled to be updated later add the respective explanation in the confirmation message what version of the product are you using mmi portal alpha original issue
0
876
3,341,555,094
IssuesEvent
2015-11-14 00:09:06
getdrifter/drifter-configuration-collector
https://api.github.com/repos/getdrifter/drifter-configuration-collector
closed
[EPIC] Project Management
epic process task
- [x] Set up board #5 - [x] Set up label scheme #7 - [x] Set up board columns #9
1.0
[EPIC] Project Management - - [x] Set up board #5 - [x] Set up label scheme #7 - [x] Set up board columns #9
process
project management set up board set up label scheme set up board columns
1
17,471
23,296,425,153
IssuesEvent
2022-08-06 16:46:45
pyanodon/pybugreports
https://api.github.com/repos/pyanodon/pybugreports
closed
Laboratory Instruments seem to have wrong icon while in an inserter
invalid question mod:pycoalprocessing
### Mod source PyAE Beta ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [X] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [ ] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [X] Graphical - [ ] Crash - [ ] Progression - [ ] Balance - [ ] Pypostprocessing failure - [ ] Other ### What is the problem? the laboratory instruments look like a mortar and pestle when in an inserter. they look correct on the ground. is it just set to use the wrong icon in one case? ### Steps to reproduce put a laboratory instrument in an inserter and look at it ### Additional context ![20220804174722_1](https://user-images.githubusercontent.com/74868214/182907495-5e60e975-6a33-44dd-8390-e2e890feab0f.jpg) ### Log file [factorio-current.log](https://github.com/pyanodon/pybugreports/files/9261736/factorio-current.log)
1.0
Laboratory Instruments seem to have wrong icon while in an inserter - ### Mod source PyAE Beta ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [X] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [ ] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [X] Graphical - [ ] Crash - [ ] Progression - [ ] Balance - [ ] Pypostprocessing failure - [ ] Other ### What is the problem? the laboratory instruments look like a mortar and pestle when in an inserter. they look correct on the ground. is it just set to use the wrong icon in one case? ### Steps to reproduce put a laboratory instrument in an inserter and look at it ### Additional context ![20220804174722_1](https://user-images.githubusercontent.com/74868214/182907495-5e60e975-6a33-44dd-8390-e2e890feab0f.jpg) ### Log file [factorio-current.log](https://github.com/pyanodon/pybugreports/files/9261736/factorio-current.log)
process
laboratory instruments seem to have wrong icon while in an inserter mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem the laboratory instruments look like a mortar and pestle when in an inserter they look correct on the ground is it just set to use the wrong icon in one case steps to reproduce put a laboratory instrument in an inserter and look at it additional context log file
1
823,612
31,026,111,748
IssuesEvent
2023-08-10 09:14:59
National-Forestry-Authority/forests
https://api.github.com/repos/National-Forestry-Authority/forests
opened
Open layer sidebar on all maps
Priority/High Ready to start
On all maps the layer switcher sidebar should be open by default ![Image](https://github.com/National-Forestry-Authority/forests/assets/1312689/c7fcbb4e-f7e0-485f-8137-08dba117c922)
1.0
Open layer sidebar on all maps - On all maps the layer switcher sidebar should be open by default ![Image](https://github.com/National-Forestry-Authority/forests/assets/1312689/c7fcbb4e-f7e0-485f-8137-08dba117c922)
non_process
open layer sidebar on all maps on all maps the layer switcher sidebar should be open by default
0
10,834
13,616,805,779
IssuesEvent
2020-09-23 16:06:09
CDLUC3/Make-Data-Count
https://api.github.com/repos/CDLUC3/Make-Data-Count
closed
Long-term home for documentation
Log Processing S07: Document Log Processor review
Design and deploy published version of public docs in a long-term location.
2.0
Long-term home for documentation - Design and deploy published version of public docs in a long-term location.
process
long term home for documentation design and deploy published version of public docs in a long term location
1
192,103
14,599,678,878
IssuesEvent
2020-12-21 04:58:01
github-vet/rangeloop-pointer-findings
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
closed
barakmich/go_sse2: src/sync/atomic/value_test.go; 30 LoC
fresh small test
Found a possible issue in [barakmich/go_sse2](https://www.github.com/barakmich/go_sse2) at [src/sync/atomic/value_test.go](https://github.com/barakmich/go_sse2/blob/a6a26455d4f4f81cfe89b1ec3261da5b80fa96aa/src/sync/atomic/value_test.go#L92-L121) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > range-loop variable test used in defer or goroutine at line 101 [Click here to see the code in its original context.](https://github.com/barakmich/go_sse2/blob/a6a26455d4f4f81cfe89b1ec3261da5b80fa96aa/src/sync/atomic/value_test.go#L92-L121) <details> <summary>Click here to show the 30 line(s) of Go which triggered the analyzer.</summary> ```go for _, test := range tests { var v Value done := make(chan bool, p) for i := 0; i < p; i++ { go func() { r := rand.New(rand.NewSource(rand.Int63())) expected := true loop: for j := 0; j < N; j++ { x := test[r.Intn(len(test))] v.Store(x) x = v.Load() for _, x1 := range test { if x == x1 { continue loop } } t.Logf("loaded unexpected value %+v, want %+v", x, test) expected = false break } done <- expected }() } for i := 0; i < p; i++ { if !<-done { t.FailNow() } } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: a6a26455d4f4f81cfe89b1ec3261da5b80fa96aa
1.0
barakmich/go_sse2: src/sync/atomic/value_test.go; 30 LoC - Found a possible issue in [barakmich/go_sse2](https://www.github.com/barakmich/go_sse2) at [src/sync/atomic/value_test.go](https://github.com/barakmich/go_sse2/blob/a6a26455d4f4f81cfe89b1ec3261da5b80fa96aa/src/sync/atomic/value_test.go#L92-L121) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > range-loop variable test used in defer or goroutine at line 101 [Click here to see the code in its original context.](https://github.com/barakmich/go_sse2/blob/a6a26455d4f4f81cfe89b1ec3261da5b80fa96aa/src/sync/atomic/value_test.go#L92-L121) <details> <summary>Click here to show the 30 line(s) of Go which triggered the analyzer.</summary> ```go for _, test := range tests { var v Value done := make(chan bool, p) for i := 0; i < p; i++ { go func() { r := rand.New(rand.NewSource(rand.Int63())) expected := true loop: for j := 0; j < N; j++ { x := test[r.Intn(len(test))] v.Store(x) x = v.Load() for _, x1 := range test { if x == x1 { continue loop } } t.Logf("loaded unexpected value %+v, want %+v", x, test) expected = false break } done <- expected }() } for i := 0; i < p; i++ { if !<-done { t.FailNow() } } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: a6a26455d4f4f81cfe89b1ec3261da5b80fa96aa
non_process
barakmich go src sync atomic value test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable test used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for test range tests var v value done make chan bool p for i i p i go func r rand new rand newsource rand expected true loop for j j n j x test v store x x v load for range test if x continue loop t logf loaded unexpected value v want v x test expected false break done expected for i i p i if done t failnow leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
678,937
23,216,392,586
IssuesEvent
2022-08-02 14:23:55
Apicurio/apicurio-registry
https://api.github.com/repos/Apicurio/apicurio-registry
closed
NullPointerException while creating artifact on version 2.2.1.Final
Bug component/registry priority/high
``` 2022-03-22 14:48:46 WARN <_> [io.apicurio.registry.metrics.health.liveness.PersistenceExceptionLivenessCheck] (executor-thread-3) Liveness problem suspected in PersistenceExceptionLivenessCheck because of an exception: : java.lang.NullPointerException at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage.nextClusterContentId(KafkaSqlRegistryStorage.java:325) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage.ensureContent(KafkaSqlRegistryStorage.java:343) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage.createArtifactWithMetadata(KafkaSqlRegistryStorage.java:399) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage_Subclass.createArtifactWithMetadata$$superforward1(KafkaSqlRegistryStorage_Subclass.zig:3642) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage_Subclass$$function$$9.apply(KafkaSqlRegistryStorage_Subclass$$function$$9.zig:65) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:54) at io.apicurio.registry.metrics.health.readiness.PersistenceTimeoutReadinessInterceptor.intercept(PersistenceTimeoutReadinessInterceptor.java:32) at io.apicurio.registry.metrics.health.readiness.PersistenceTimeoutReadinessInterceptor_Bean.intercept(PersistenceTimeoutReadinessInterceptor_Bean.zig:429) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50) at io.apicurio.registry.metrics.health.liveness.PersistenceExceptionLivenessInterceptor.intercept(PersistenceExceptionLivenessInterceptor.java:27) at io.apicurio.registry.metrics.health.liveness.PersistenceExceptionLivenessInterceptor_Bean.intercept(PersistenceExceptionLivenessInterceptor_Bean.zig:378) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at 
io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50) at io.apicurio.registry.logging.LoggingInterceptor.logMethodEntry(LoggingInterceptor.java:55) at io.apicurio.registry.logging.LoggingInterceptor_Bean.intercept(LoggingInterceptor_Bean.zig:327) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50) at io.apicurio.registry.metrics.StorageMetricsInterceptor.intercept(StorageMetricsInterceptor.java:48) at io.apicurio.registry.metrics.StorageMetricsInterceptor_Bean.intercept(StorageMetricsInterceptor_Bean.zig:429) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.perform(AroundInvokeInvocationContext.java:41) at io.quarkus.arc.impl.InvocationContexts.performAroundInvoke(InvocationContexts.java:32) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage_Subclass.createArtifactWithMetadata(KafkaSqlRegistryStorage_Subclass.zig:6078) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage_ClientProxy.createArtifactWithMetadata(KafkaSqlRegistryStorage_ClientProxy.zig:2240) at io.apicurio.registry.storage.RegistryStorageProducer_ProducerMethod_realImpl_cf1c876861dd1c25dca504d30a12bfedeafd47bd_ClientProxy.createArtifactWithMetadata(RegistryStorageProducer_ProducerMethod_realImpl_cf1c876861dd1c25dca504d30a12bfedeafd47bd_ClientProxy.zig:1197) at io.apicurio.registry.rest.v2.GroupsResourceImpl.createArtifact(GroupsResourceImpl.java:561) at io.apicurio.registry.rest.v2.GroupsResourceImpl_Subclass.createArtifact$$superforward1(GroupsResourceImpl_Subclass.zig:2487) at io.apicurio.registry.rest.v2.GroupsResourceImpl_Subclass$$function$$1.apply(GroupsResourceImpl_Subclass$$function$$1.zig:95) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:54) at 
io.apicurio.registry.auth.AuthorizedInterceptor.authorizeMethod(AuthorizedInterceptor.java:86) at io.apicurio.registry.auth.AuthorizedInterceptor_Bean.intercept(AuthorizedInterceptor_Bean.zig:743) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50) at io.apicurio.registry.logging.LoggingInterceptor.logMethodEntry(LoggingInterceptor.java:55) at io.apicurio.registry.logging.LoggingInterceptor_Bean.intercept(LoggingInterceptor_Bean.zig:327) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50) at io.apicurio.registry.logging.audit.AuditedInterceptor.auditMethod(AuditedInterceptor.java:79) at io.apicurio.registry.logging.audit.AuditedInterceptor_Bean.intercept(AuditedInterceptor_Bean.zig:383) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.perform(AroundInvokeInvocationContext.java:41) at io.quarkus.arc.impl.InvocationContexts.performAroundInvoke(InvocationContexts.java:32) at io.apicurio.registry.rest.v2.GroupsResourceImpl_Subclass.createArtifact(GroupsResourceImpl_Subclass.zig:3256) at io.apicurio.registry.rest.v2.GroupsResourceImpl_ClientProxy.createArtifact(GroupsResourceImpl_ClientProxy.zig:1068) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:170) at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:130) at 
org.jboss.resteasy.core.ResourceMethodInvoker.internalInvokeOnTarget(ResourceMethodInvoker.java:660) at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTargetAfterFilter(ResourceMethodInvoker.java:524) at org.jboss.resteasy.core.ResourceMethodInvoker.lambda$invokeOnTarget$2(ResourceMethodInvoker.java:474) at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:364) at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:476) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:434) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:408) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:69) at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:492) at org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:261) at org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:161) at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:364) at org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:164) at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:247) at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:249) at io.quarkus.resteasy.runtime.ResteasyFilter.doFilter(ResteasyFilter.java:35) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.apicurio.registry.ui.servlets.HSTSFilter.doFilter(HSTSFilter.java:62) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at 
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.apicurio.registry.ui.servlets.ResourceCacheControlFilter.doFilter(ResourceCacheControlFilter.java:92) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.apicurio.registry.rest.RegistryApplicationServletFilter.doFilter(RegistryApplicationServletFilter.java:162) at io.apicurio.registry.rest.RegistryApplicationServletFilter_ClientProxy.doFilter(RegistryApplicationServletFilter_ClientProxy.zig:196) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84) at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:63) at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68) at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:67) at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:133) at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46) at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:65) at 
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60) at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77) at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50) at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:247) at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:56) at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:111) at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:108) at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48) at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43) at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$9$1.call(UndertowDeploymentRecorder.java:593) at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:227) at io.undertow.servlet.handlers.ServletInitialHandler.handleRequest(ServletInitialHandler.java:152) at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$1.handleRequest(UndertowDeploymentRecorder.java:119) at io.undertow.server.Connectors.executeRootHandler(Connectors.java:290) at io.undertow.server.DefaultExchangeHandler.handle(DefaultExchangeHandler.java:18) at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$5$1.run(UndertowDeploymentRecorder.java:415) at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at io.quarkus.vertx.core.runtime.VertxCoreRecorder$13.runWith(VertxCoreRecorder.java:543) at org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2449) at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1478) at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29) at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:829) ```
1.0
NullPointerException while creating artifact on version 2.2.1.Final - ``` 2022-03-22 14:48:46 WARN <_> [io.apicurio.registry.metrics.health.liveness.PersistenceExceptionLivenessCheck] (executor-thread-3) Liveness problem suspected in PersistenceExceptionLivenessCheck because of an exception: : java.lang.NullPointerException at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage.nextClusterContentId(KafkaSqlRegistryStorage.java:325) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage.ensureContent(KafkaSqlRegistryStorage.java:343) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage.createArtifactWithMetadata(KafkaSqlRegistryStorage.java:399) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage_Subclass.createArtifactWithMetadata$$superforward1(KafkaSqlRegistryStorage_Subclass.zig:3642) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage_Subclass$$function$$9.apply(KafkaSqlRegistryStorage_Subclass$$function$$9.zig:65) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:54) at io.apicurio.registry.metrics.health.readiness.PersistenceTimeoutReadinessInterceptor.intercept(PersistenceTimeoutReadinessInterceptor.java:32) at io.apicurio.registry.metrics.health.readiness.PersistenceTimeoutReadinessInterceptor_Bean.intercept(PersistenceTimeoutReadinessInterceptor_Bean.zig:429) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50) at io.apicurio.registry.metrics.health.liveness.PersistenceExceptionLivenessInterceptor.intercept(PersistenceExceptionLivenessInterceptor.java:27) at io.apicurio.registry.metrics.health.liveness.PersistenceExceptionLivenessInterceptor_Bean.intercept(PersistenceExceptionLivenessInterceptor_Bean.zig:378) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at 
io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50) at io.apicurio.registry.logging.LoggingInterceptor.logMethodEntry(LoggingInterceptor.java:55) at io.apicurio.registry.logging.LoggingInterceptor_Bean.intercept(LoggingInterceptor_Bean.zig:327) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50) at io.apicurio.registry.metrics.StorageMetricsInterceptor.intercept(StorageMetricsInterceptor.java:48) at io.apicurio.registry.metrics.StorageMetricsInterceptor_Bean.intercept(StorageMetricsInterceptor_Bean.zig:429) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.perform(AroundInvokeInvocationContext.java:41) at io.quarkus.arc.impl.InvocationContexts.performAroundInvoke(InvocationContexts.java:32) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage_Subclass.createArtifactWithMetadata(KafkaSqlRegistryStorage_Subclass.zig:6078) at io.apicurio.registry.storage.impl.kafkasql.KafkaSqlRegistryStorage_ClientProxy.createArtifactWithMetadata(KafkaSqlRegistryStorage_ClientProxy.zig:2240) at io.apicurio.registry.storage.RegistryStorageProducer_ProducerMethod_realImpl_cf1c876861dd1c25dca504d30a12bfedeafd47bd_ClientProxy.createArtifactWithMetadata(RegistryStorageProducer_ProducerMethod_realImpl_cf1c876861dd1c25dca504d30a12bfedeafd47bd_ClientProxy.zig:1197) at io.apicurio.registry.rest.v2.GroupsResourceImpl.createArtifact(GroupsResourceImpl.java:561) at io.apicurio.registry.rest.v2.GroupsResourceImpl_Subclass.createArtifact$$superforward1(GroupsResourceImpl_Subclass.zig:2487) at io.apicurio.registry.rest.v2.GroupsResourceImpl_Subclass$$function$$1.apply(GroupsResourceImpl_Subclass$$function$$1.zig:95) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:54) at 
io.apicurio.registry.auth.AuthorizedInterceptor.authorizeMethod(AuthorizedInterceptor.java:86) at io.apicurio.registry.auth.AuthorizedInterceptor_Bean.intercept(AuthorizedInterceptor_Bean.zig:743) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50) at io.apicurio.registry.logging.LoggingInterceptor.logMethodEntry(LoggingInterceptor.java:55) at io.apicurio.registry.logging.LoggingInterceptor_Bean.intercept(LoggingInterceptor_Bean.zig:327) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:50) at io.apicurio.registry.logging.audit.AuditedInterceptor.auditMethod(AuditedInterceptor.java:79) at io.apicurio.registry.logging.audit.AuditedInterceptor_Bean.intercept(AuditedInterceptor_Bean.zig:383) at io.quarkus.arc.impl.InterceptorInvocation.invoke(InterceptorInvocation.java:41) at io.quarkus.arc.impl.AroundInvokeInvocationContext.perform(AroundInvokeInvocationContext.java:41) at io.quarkus.arc.impl.InvocationContexts.performAroundInvoke(InvocationContexts.java:32) at io.apicurio.registry.rest.v2.GroupsResourceImpl_Subclass.createArtifact(GroupsResourceImpl_Subclass.zig:3256) at io.apicurio.registry.rest.v2.GroupsResourceImpl_ClientProxy.createArtifact(GroupsResourceImpl_ClientProxy.zig:1068) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:170) at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:130) at 
org.jboss.resteasy.core.ResourceMethodInvoker.internalInvokeOnTarget(ResourceMethodInvoker.java:660) at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTargetAfterFilter(ResourceMethodInvoker.java:524) at org.jboss.resteasy.core.ResourceMethodInvoker.lambda$invokeOnTarget$2(ResourceMethodInvoker.java:474) at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:364) at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:476) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:434) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:408) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:69) at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:492) at org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:261) at org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:161) at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:364) at org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:164) at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:247) at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:249) at io.quarkus.resteasy.runtime.ResteasyFilter.doFilter(ResteasyFilter.java:35) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.apicurio.registry.ui.servlets.HSTSFilter.doFilter(HSTSFilter.java:62) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at 
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.apicurio.registry.ui.servlets.ResourceCacheControlFilter.doFilter(ResourceCacheControlFilter.java:92) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.apicurio.registry.rest.RegistryApplicationServletFilter.doFilter(RegistryApplicationServletFilter.java:162) at io.apicurio.registry.rest.RegistryApplicationServletFilter_ClientProxy.doFilter(RegistryApplicationServletFilter_ClientProxy.zig:196) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84) at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:63) at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68) at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:67) at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:133) at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46) at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:65) at 
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60) at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77) at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50) at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:247) at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:56) at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:111) at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:108) at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48) at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43) at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$9$1.call(UndertowDeploymentRecorder.java:593) at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:227) at io.undertow.servlet.handlers.ServletInitialHandler.handleRequest(ServletInitialHandler.java:152) at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$1.handleRequest(UndertowDeploymentRecorder.java:119) at io.undertow.server.Connectors.executeRootHandler(Connectors.java:290) at io.undertow.server.DefaultExchangeHandler.handle(DefaultExchangeHandler.java:18) at io.quarkus.undertow.runtime.UndertowDeploymentRecorder$5$1.run(UndertowDeploymentRecorder.java:415) at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at io.quarkus.vertx.core.runtime.VertxCoreRecorder$13.runWith(VertxCoreRecorder.java:543) at org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2449) at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1478) at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29) at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:829) ```
non_process
0
9,399
12,398,102,636
IssuesEvent
2020-05-21 00:46:08
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Druid: allow filtering on metrics
Database/Druid Querying/Processor Type:Bug
Split off from #10935 > we currently don't support filtering on metrics
1.0
Druid: allow filtering on metrics - Split off from #10935 > we currently don't support filtering on metrics
process
1
604,625
18,715,748,249
IssuesEvent
2021-11-03 04:15:00
FoxxieBot/Foxxie
https://api.github.com/repos/FoxxieBot/Foxxie
closed
Request(events): Refactor events to be more centralized
enhancement Refactor Priority: Low
With this issue I want to make the general events more centralized, with one listener on the Discord API that emits all the Foxxie events. Discord events will mostly be raw emitters.
1.0
Request(events): Refactor events to be more centralized - With this issue I want to make the general events more centralized, with one listener on the Discord API that emits all the Foxxie events. Discord events will mostly be raw emitters.
non_process
0