Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
503,393 | 14,590,886,018 | IssuesEvent | 2020-12-19 10:10:52 | bounswe/bounswe2020group2 | https://api.github.com/repos/bounswe/bounswe2020group2 | closed | [ANDROID] payment option and actions | effort: high priority: critical type: android | In android, payment page does not exist. "pay" option has to be added in cart fragment and it has to navigate user to the payment fragment. In payment fragment, user has to enter credit card information and other necessary details. | 1.0 | [ANDROID] payment option and actions - In android, payment page does not exist. "pay" option has to be added in cart fragment and it has to navigate user to the payment fragment. In payment fragment, user has to enter credit card information and other necessary details. | priority | payment option and actions in android payment page does not exist pay option has to be added in cart fragment and it has to navigate user to the payment fragment in payment fragment user has to enter credit card information and other necessary details | 1 |
302,589 | 9,284,185,389 | IssuesEvent | 2019-03-21 00:19:42 | CS2103-AY1819S2-T12-2/main | https://api.github.com/repos/CS2103-AY1819S2-T12-2/main | closed | Enhance Find Method | priority.High status.Ongoing type.Enhancement | The current find method in AB4 only finds based on "name". For our project we want to find Flashcards based on text but also based on tags. For example, we can find "Spanish" Flashcards. | 1.0 | Enhance Find Method - The current find method in AB4 only finds based on "name". For our project we want to find Flashcards based on text but also based on tags. For example, we can find "Spanish" Flashcards. | priority | enhance find method the current find method in only finds based on name for our project we want to find flashcards based on text but also based on tags for example we can find spanish flashcards | 1 |
176,854 | 6,565,812,488 | IssuesEvent | 2017-09-08 09:50:59 | JujaLabs/users | https://api.github.com/repos/JujaLabs/users | opened | Return Exception to other services if users not found or found extra users | High priority | Return Exception to other services if users not found or found extra users by slacknames or uuids. Exception message must contain slacknames or uuids | 1.0 | Return Exception to other services if users not found or found extra users - Return Exception to other services if users not found or found extra users by slacknames or uuids. Exception message must contain slacknames or uuids | priority | return exception to other services if users not found or found extra users return exception to other services if users not found or found extra users by slacknames or uuids exception message must contain slacknames or uuids | 1 |
278,101 | 8,636,533,386 | IssuesEvent | 2018-11-23 07:55:44 | xournalpp/xournalpp | https://api.github.com/repos/xournalpp/xournalpp | closed | Remove gdk_threads_enter() | enhancement high priority | Since Gtk3.6 gdk_threads_enter() has been deprecated along with gdk_threads_leave(). The focus should be to remove any direct gdk calls (including Cairo calls) from the secondary threads, and clean up the gdk_threads_*. The recommendation appears to be that we use gdk_threads_add_idle() on all secondary threads. In addition the g_idle_add() and g_timeout_add() are apparently only safe when no code is gdk_threads_enter() dependent - therefore, these should be changed, at least for now, to gdk_threads_add_idle() etc. | 1.0 | Remove gdk_threads_enter() - Since Gtk3.6 gdk_threads_enter() has been deprecated along with gdk_threads_leave(). The focus should be to remove any direct gdk calls (including Cairo calls) from the secondary threads, and clean up the gdk_threads_*. The recommendation appears to be that we use gdk_threads_add_idle() on all secondary threads. In addition the g_idle_add() and g_timeout_add() are apparently only safe when no code is gdk_threads_enter() dependent - therefore, these should be changed, at least for now, to gdk_threads_add_idle() etc. | priority | remove gdk threads enter since gdk threads enter has been deprecated along with gdk threads leave the focus should be to remove any direct gdk calls including cairo calls from the secondary threads and clean up the gdk threads the recommendation appears to be that we use gdk threads add idle on all secondary threads in addition the g idle add and g timeout add are apparently only safe when no code is gdk threads enter dependent therefore these should be changed at least for now to gdk threads add idle etc | 1 |
22,905 | 2,651,255,134 | IssuesEvent | 2015-03-16 10:04:50 | GermanCentralLibraryForTheBlind/readium-js-viewer | https://api.github.com/repos/GermanCentralLibraryForTheBlind/readium-js-viewer | opened | Thema Readium / Ebooks in der DZB bekannt machen | discussion high priority | - Thema ist nach wie vor im Haus wenig präsent - via Rundmail - in Verbindung mit einem Buch/Büchern, das/die viel Interesse auf sich ziehen (Bestseller) | 1.0 | Thema Readium / Ebooks in der DZB bekannt machen - - Thema ist nach wie vor im Haus wenig präsent - via Rundmail - in Verbindung mit einem Buch/Büchern, das/die viel Interesse auf sich ziehen (Bestseller) | priority | thema readium ebooks in der dzb bekannt machen thema ist nach wie vor im haus wenig präsent via rundmail in verbindung mit einem buch büchern das die viel interesse auf sich ziehen bestseller | 1 |
753,522 | 26,351,127,182 | IssuesEvent | 2023-01-11 05:02:07 | TampaDevs/tampadevs | https://api.github.com/repos/TampaDevs/tampadevs | closed | Builds Are Broken Since 3b9a88b | bug high priority dependencies | Hello all,
Builds have been broken since commit [`3b9a88b`](https://github.com/TampaDevs/tampadevs/commit/3b9a88b5d85cc56679d96bdf2a772af54bc12622). The currently deployed version of the site is [`c8d7b25`](https://github.com/TampaDevs/tampadevs/commit/c8d7b255afa4fa7ad6b2fb273c034bf95edf8f7a).
<img width="1274" alt="image" src="https://user-images.githubusercontent.com/7227500/211381338-97765d84-0c6f-4ccd-8975-10a95139e820.png">
Builds are run with a Node version of `17.9.1`. Per the Pages build log output below, the issue appears to be related to `babel-loader`:
```
22:46:55.220 | Cloning repository...
-- | --
22:46:58.240 | From https://github.com/TampaDevs/tampadevs
22:46:58.240 | * branch 112583fafb29a78a84f808e73d134f436efcf431 -> FETCH_HEAD
22:46:58.241 |
22:46:58.698 | HEAD is now at 112583f update survey for 2022
22:46:58.698 |
22:46:58.855 |
22:46:58.883 | Success: Finished cloning repository files
22:46:59.607 | Installing dependencies
22:46:59.620 | Python version set to 2.7
22:47:03.308 | Downloading and installing node v17.9.1...
22:47:03.758 | Downloading https://nodejs.org/dist/v17.9.1/node-v17.9.1-linux-x64.tar.xz...
22:47:04.216 | Computing checksum with sha256sum
22:47:04.349 | Checksums matched!
22:47:09.576 | Now using node v17.9.1 (npm v8.11.0)
22:47:10.010 | Started restoring cached build plugins
22:47:10.024 | Finished restoring cached build plugins
22:47:10.580 | Attempting ruby version 2.7.1, read from environment
22:47:14.588 | Using ruby version 2.7.1
22:47:14.960 | Using PHP version 5.6
22:47:15.138 | 5.2 is already installed.
22:47:15.166 | Using Swift version 5.2
22:47:15.167 | Started restoring cached node modules
22:47:15.186 | Finished restoring cached node modules
22:47:15.711 | Installing NPM modules using NPM version 8.11.0
22:47:16.134 | npm WARN config tmp This setting is no longer used. npm stores temporary files in a special
22:47:16.135 | npm WARN config location in the cache, and they are managed by
22:47:16.135 | npm WARN config [`cacache`](http://npm.im/cacache).
22:47:16.549 | npm WARN config tmp This setting is no longer used. npm stores temporary files in a special
22:47:16.549 | npm WARN config location in the cache, and they are managed by
22:47:16.549 | npm WARN config [`cacache`](http://npm.im/cacache).
22:47:22.496 | npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
22:47:22.621 | npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
22:47:22.734 | npm WARN deprecated source-map-url@0.4.0: See https://github.com/lydell/source-map-url#deprecated
22:47:23.544 | npm WARN deprecated stable@0.1.8: Modern JS already guarantees Array#sort() is a stable sort, so this library is deprecated. See the compatibility table on MDN: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort#browser_compatibility
22:47:24.567 | npm WARN deprecated source-map-resolve@0.5.2: See https://github.com/lydell/source-map-resolve#deprecated
22:47:24.657 | npm WARN deprecated querystring@0.2.0: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
22:47:29.380 | npm WARN deprecated chokidar@2.1.8: Chokidar 2 does not receive security updates since 2019. Upgrade to chokidar 3 with 15x fewer dependencies
22:47:35.659 |
22:47:35.660 | added 946 packages, and audited 947 packages in 19s
22:47:35.660 |
22:47:35.660 | 58 packages are looking for funding
22:47:35.660 | run `npm fund` for details
22:47:35.675 |
22:47:35.676 | 7 vulnerabilities (2 moderate, 5 high)
22:47:35.676 |
22:47:35.676 | To address issues that do not require attention, run:
22:47:35.676 | npm audit fix
22:47:35.677 |
22:47:35.677 | Some issues need review, and may require choosing
22:47:35.677 | a different dependency.
22:47:35.677 |
22:47:35.677 | Run `npm audit` for details.
22:47:35.697 | NPM modules installed
22:47:36.290 | npm WARN config tmp This setting is no longer used. npm stores temporary files in a special
22:47:36.291 | npm WARN config location in the cache, and they are managed by
22:47:36.291 | npm WARN config [`cacache`](http://npm.im/cacache).
22:47:36.311 | Installing Hugo 0.54.0
22:47:37.015 | Hugo Static Site Generator v0.54.0-B1A82C61A/extended linux/amd64 BuildDate: 2019-02-01T10:04:38Z
22:47:37.019 | Started restoring cached go cache
22:47:37.041 | Finished restoring cached go cache
22:47:37.200 | go version go1.14.4 linux/amd64
22:47:37.216 | go version go1.14.4 linux/amd64
22:47:37.219 | Installing missing commands
22:47:37.219 | Verify run directory
22:47:37.220 | Executing user command: npm run build
22:47:37.698 | npm WARN config tmp This setting is no longer used. npm stores temporary files in a special
22:47:37.699 | npm WARN config location in the cache, and they are managed by
22:47:37.699 | npm WARN config [`cacache`](http://npm.im/cacache).
22:47:37.715 |
22:47:37.715 | > tampadevs-eleventy@0.0.1 build
22:47:37.715 | > ELEVENTY_ENV=production eleventy
22:47:37.715 |
22:47:47.458 | [11ty] Unhandled rejection in promise: (more in DEBUG output)
22:47:47.459 | [11ty] error:0308010C:digital envelope routines::unsupported (via Error)
22:47:47.459 | [11ty]
22:47:47.459 | [11ty] Original error stack trace: Error: error:0308010C:digital envelope routines::unsupported
22:47:47.459 | [11ty] at new Hash (node:internal/crypto/hash:67:19)
22:47:47.460 | [11ty] at Object.createHash (node:crypto:135:10)
22:47:47.460 | [11ty] at module.exports (/opt/buildhome/repo/node_modules/webpack/lib/util/createHash.js:135:53)
22:47:47.460 | [11ty] at NormalModule._initBuildHash (/opt/buildhome/repo/node_modules/webpack/lib/NormalModule.js:417:16)
22:47:47.460 | [11ty] at handleParseError (/opt/buildhome/repo/node_modules/webpack/lib/NormalModule.js:471:10)
22:47:47.460 | [11ty] at /opt/buildhome/repo/node_modules/webpack/lib/NormalModule.js:503:5
22:47:47.460 | [11ty] at /opt/buildhome/repo/node_modules/webpack/lib/NormalModule.js:358:12
22:47:47.460 | [11ty] at /opt/buildhome/repo/node_modules/loader-runner/lib/LoaderRunner.js:373:3
22:47:47.460 | [11ty] at iterateNormalLoaders (/opt/buildhome/repo/node_modules/loader-runner/lib/LoaderRunner.js:214:10)
22:47:47.461 | [11ty] at iterateNormalLoaders (/opt/buildhome/repo/node_modules/loader-runner/lib/LoaderRunner.js:221:10)
22:47:47.461 | [11ty] at /opt/buildhome/repo/node_modules/loader-runner/lib/LoaderRunner.js:236:3
22:47:47.461 | [11ty] at context.callback (/opt/buildhome/repo/node_modules/loader-runner/lib/LoaderRunner.js:111:13)
22:47:47.461 | [11ty] at /opt/buildhome/repo/node_modules/babel-loader/lib/index.js:59:71
22:47:47.564 | Failed: build command exited with code: 1
22:47:48.453 | Failed: an internal error occurred
```
| 1.0 | Builds Are Broken Since 3b9a88b - Hello all,
Builds have been broken since commit [`3b9a88b`](https://github.com/TampaDevs/tampadevs/commit/3b9a88b5d85cc56679d96bdf2a772af54bc12622). The currently deployed version of the site is [`c8d7b25`](https://github.com/TampaDevs/tampadevs/commit/c8d7b255afa4fa7ad6b2fb273c034bf95edf8f7a).
<img width="1274" alt="image" src="https://user-images.githubusercontent.com/7227500/211381338-97765d84-0c6f-4ccd-8975-10a95139e820.png">
Builds are run with a Node version of `17.9.1`. Per the Pages build log output below, the issue appears to be related to `babel-loader`:
```
22:46:55.220 | Cloning repository...
-- | --
22:46:58.240 | From https://github.com/TampaDevs/tampadevs
22:46:58.240 | * branch 112583fafb29a78a84f808e73d134f436efcf431 -> FETCH_HEAD
22:46:58.241 |
22:46:58.698 | HEAD is now at 112583f update survey for 2022
22:46:58.698 |
22:46:58.855 |
22:46:58.883 | Success: Finished cloning repository files
22:46:59.607 | Installing dependencies
22:46:59.620 | Python version set to 2.7
22:47:03.308 | Downloading and installing node v17.9.1...
22:47:03.758 | Downloading https://nodejs.org/dist/v17.9.1/node-v17.9.1-linux-x64.tar.xz...
22:47:04.216 | Computing checksum with sha256sum
22:47:04.349 | Checksums matched!
22:47:09.576 | Now using node v17.9.1 (npm v8.11.0)
22:47:10.010 | Started restoring cached build plugins
22:47:10.024 | Finished restoring cached build plugins
22:47:10.580 | Attempting ruby version 2.7.1, read from environment
22:47:14.588 | Using ruby version 2.7.1
22:47:14.960 | Using PHP version 5.6
22:47:15.138 | 5.2 is already installed.
22:47:15.166 | Using Swift version 5.2
22:47:15.167 | Started restoring cached node modules
22:47:15.186 | Finished restoring cached node modules
22:47:15.711 | Installing NPM modules using NPM version 8.11.0
22:47:16.134 | npm WARN config tmp This setting is no longer used. npm stores temporary files in a special
22:47:16.135 | npm WARN config location in the cache, and they are managed by
22:47:16.135 | npm WARN config [`cacache`](http://npm.im/cacache).
22:47:16.549 | npm WARN config tmp This setting is no longer used. npm stores temporary files in a special
22:47:16.549 | npm WARN config location in the cache, and they are managed by
22:47:16.549 | npm WARN config [`cacache`](http://npm.im/cacache).
22:47:22.496 | npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
22:47:22.621 | npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
22:47:22.734 | npm WARN deprecated source-map-url@0.4.0: See https://github.com/lydell/source-map-url#deprecated
22:47:23.544 | npm WARN deprecated stable@0.1.8: Modern JS already guarantees Array#sort() is a stable sort, so this library is deprecated. See the compatibility table on MDN: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort#browser_compatibility
22:47:24.567 | npm WARN deprecated source-map-resolve@0.5.2: See https://github.com/lydell/source-map-resolve#deprecated
22:47:24.657 | npm WARN deprecated querystring@0.2.0: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
22:47:29.380 | npm WARN deprecated chokidar@2.1.8: Chokidar 2 does not receive security updates since 2019. Upgrade to chokidar 3 with 15x fewer dependencies
22:47:35.659 |
22:47:35.660 | added 946 packages, and audited 947 packages in 19s
22:47:35.660 |
22:47:35.660 | 58 packages are looking for funding
22:47:35.660 | run `npm fund` for details
22:47:35.675 |
22:47:35.676 | 7 vulnerabilities (2 moderate, 5 high)
22:47:35.676 |
22:47:35.676 | To address issues that do not require attention, run:
22:47:35.676 | npm audit fix
22:47:35.677 |
22:47:35.677 | Some issues need review, and may require choosing
22:47:35.677 | a different dependency.
22:47:35.677 |
22:47:35.677 | Run `npm audit` for details.
22:47:35.697 | NPM modules installed
22:47:36.290 | npm WARN config tmp This setting is no longer used. npm stores temporary files in a special
22:47:36.291 | npm WARN config location in the cache, and they are managed by
22:47:36.291 | npm WARN config [`cacache`](http://npm.im/cacache).
22:47:36.311 | Installing Hugo 0.54.0
22:47:37.015 | Hugo Static Site Generator v0.54.0-B1A82C61A/extended linux/amd64 BuildDate: 2019-02-01T10:04:38Z
22:47:37.019 | Started restoring cached go cache
22:47:37.041 | Finished restoring cached go cache
22:47:37.200 | go version go1.14.4 linux/amd64
22:47:37.216 | go version go1.14.4 linux/amd64
22:47:37.219 | Installing missing commands
22:47:37.219 | Verify run directory
22:47:37.220 | Executing user command: npm run build
22:47:37.698 | npm WARN config tmp This setting is no longer used. npm stores temporary files in a special
22:47:37.699 | npm WARN config location in the cache, and they are managed by
22:47:37.699 | npm WARN config [`cacache`](http://npm.im/cacache).
22:47:37.715 |
22:47:37.715 | > tampadevs-eleventy@0.0.1 build
22:47:37.715 | > ELEVENTY_ENV=production eleventy
22:47:37.715 |
22:47:47.458 | [11ty] Unhandled rejection in promise: (more in DEBUG output)
22:47:47.459 | [11ty] error:0308010C:digital envelope routines::unsupported (via Error)
22:47:47.459 | [11ty]
22:47:47.459 | [11ty] Original error stack trace: Error: error:0308010C:digital envelope routines::unsupported
22:47:47.459 | [11ty] at new Hash (node:internal/crypto/hash:67:19)
22:47:47.460 | [11ty] at Object.createHash (node:crypto:135:10)
22:47:47.460 | [11ty] at module.exports (/opt/buildhome/repo/node_modules/webpack/lib/util/createHash.js:135:53)
22:47:47.460 | [11ty] at NormalModule._initBuildHash (/opt/buildhome/repo/node_modules/webpack/lib/NormalModule.js:417:16)
22:47:47.460 | [11ty] at handleParseError (/opt/buildhome/repo/node_modules/webpack/lib/NormalModule.js:471:10)
22:47:47.460 | [11ty] at /opt/buildhome/repo/node_modules/webpack/lib/NormalModule.js:503:5
22:47:47.460 | [11ty] at /opt/buildhome/repo/node_modules/webpack/lib/NormalModule.js:358:12
22:47:47.460 | [11ty] at /opt/buildhome/repo/node_modules/loader-runner/lib/LoaderRunner.js:373:3
22:47:47.460 | [11ty] at iterateNormalLoaders (/opt/buildhome/repo/node_modules/loader-runner/lib/LoaderRunner.js:214:10)
22:47:47.461 | [11ty] at iterateNormalLoaders (/opt/buildhome/repo/node_modules/loader-runner/lib/LoaderRunner.js:221:10)
22:47:47.461 | [11ty] at /opt/buildhome/repo/node_modules/loader-runner/lib/LoaderRunner.js:236:3
22:47:47.461 | [11ty] at context.callback (/opt/buildhome/repo/node_modules/loader-runner/lib/LoaderRunner.js:111:13)
22:47:47.461 | [11ty] at /opt/buildhome/repo/node_modules/babel-loader/lib/index.js:59:71
22:47:47.564 | Failed: build command exited with code: 1
22:47:48.453 | Failed: an internal error occurred
```
| priority | builds are broken since hello all builds have been broken since commit the currently deployed version of the site is img width alt image src builds are run with a node version of per the pages build log output below the issue appears to be related to babel loader cloning repository from branch fetch head head is now at update survey for success finished cloning repository files installing dependencies python version set to downloading and installing node downloading computing checksum with checksums matched now using node npm started restoring cached build plugins finished restoring cached build plugins attempting ruby version read from environment using ruby version using php version is already installed using swift version started restoring cached node modules finished restoring cached node modules installing npm modules using npm version npm warn config tmp this setting is no longer used npm stores temporary files in a special npm warn config location in the cache and they are managed by npm warn config npm warn config tmp this setting is no longer used npm stores temporary files in a special npm warn config location in the cache and they are managed by npm warn config npm warn deprecated urix please see npm warn deprecated resolve url npm warn deprecated source map url see npm warn deprecated stable modern js already guarantees array sort is a stable sort so this library is deprecated see the compatibility table on mdn npm warn deprecated source map resolve see npm warn deprecated querystring the querystring api is considered legacy new code should use the urlsearchparams api instead npm warn deprecated chokidar chokidar does not receive security updates since upgrade to chokidar with fewer dependencies added packages and audited packages in packages are looking for funding run npm fund for details vulnerabilities moderate high to address issues that do not require attention run npm audit fix some issues need review and may require choosing a 
different dependency run npm audit for details npm modules installed npm warn config tmp this setting is no longer used npm stores temporary files in a special npm warn config location in the cache and they are managed by npm warn config installing hugo hugo static site generator extended linux builddate started restoring cached go cache finished restoring cached go cache go version linux go version linux installing missing commands verify run directory executing user command npm run build npm warn config tmp this setting is no longer used npm stores temporary files in a special npm warn config location in the cache and they are managed by npm warn config tampadevs eleventy build eleventy env production eleventy unhandled rejection in promise more in debug output error digital envelope routines unsupported via error original error stack trace error error digital envelope routines unsupported at new hash node internal crypto hash at object createhash node crypto at module exports opt buildhome repo node modules webpack lib util createhash js at normalmodule initbuildhash opt buildhome repo node modules webpack lib normalmodule js at handleparseerror opt buildhome repo node modules webpack lib normalmodule js at opt buildhome repo node modules webpack lib normalmodule js at opt buildhome repo node modules webpack lib normalmodule js at opt buildhome repo node modules loader runner lib loaderrunner js at iteratenormalloaders opt buildhome repo node modules loader runner lib loaderrunner js at iteratenormalloaders opt buildhome repo node modules loader runner lib loaderrunner js at opt buildhome repo node modules loader runner lib loaderrunner js at context callback opt buildhome repo node modules loader runner lib loaderrunner js at opt buildhome repo node modules babel loader lib index js failed build command exited with code failed an internal error occurred | 1 |
780,021 | 27,376,155,032 | IssuesEvent | 2023-02-28 06:10:10 | phetsims/joist | https://api.github.com/repos/phetsims/joist | closed | Add phetioDocumentation to preferencesModel linked elements | priority:2-high dev:phet-io status:ready-for-review | From https://github.com/phetsims/phet-io/issues/1913
So that something like `. . . preferencesModel.audioModel.voicePitchProperty` has doc.
<details>
```diff
Subject: [PATCH] open based on stringified state, not escaped, https://github.com/phetsims/phet-io/issues/1913
---
Index: js/preferences/PreferencesModel.ts
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/js/preferences/PreferencesModel.ts b/js/preferences/PreferencesModel.ts
--- a/js/preferences/PreferencesModel.ts (revision e4babfbaee30f5c36fe5d8b160c1a8acd503b8a7)
+++ b/js/preferences/PreferencesModel.ts (date 1677523304456)
@@ -337,7 +337,7 @@
] );
this.addPhetioLinkedElementsForModel( options.tandem, this.audioModel, [
- { property: this.audioModel.audioEnabledProperty, tandemName: 'audioEnabledProperty' },
+ { property: this.audioModel.audioEnabledProperty, tandemName: 'audioEnabledProperty', phetioDocumentation: 'hi documentation' },
{ property: this.audioModel.soundEnabledProperty, tandemName: 'soundEnabledProperty' },
{ property: this.audioModel.extraSoundEnabledProperty, tandemName: 'extraSoundEnabledProperty' },
{ property: this.audioModel.voicingEnabledProperty, tandemName: 'voicingEnabledProperty' },
@@ -443,7 +443,11 @@
for ( let j = 0; j < propertiesToLink.length; j++ ) {
const modelPropertyObject = propertiesToLink[ j ];
const tandemName = modelPropertyObject.tandemName || modelPropertyObject.property.tandem.name;
- this.addLinkedElement( modelPropertyObject.property, { tandem: tandem.createTandem( tandemName ) } );
+ const options = { tandem: tandem.createTandem( tandemName ) };
+ if ( modelPropertyObject.phetioDocumentation ) {
+ options.phetioDocumentation = modelPropertyObject.phetioDocumentation
+ }
+ this.addLinkedElement( modelPropertyObject.property, options );
}
}
| 1.0 | Add phetioDocumentation to preferencesModel linked elements - From https://github.com/phetsims/phet-io/issues/1913
So that something like `. . . preferencesModel.audioModel.voicePitchProperty` has doc.
<details>
```diff
Subject: [PATCH] open based on stringified state, not escaped, https://github.com/phetsims/phet-io/issues/1913
---
Index: js/preferences/PreferencesModel.ts
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/js/preferences/PreferencesModel.ts b/js/preferences/PreferencesModel.ts
--- a/js/preferences/PreferencesModel.ts (revision e4babfbaee30f5c36fe5d8b160c1a8acd503b8a7)
+++ b/js/preferences/PreferencesModel.ts (date 1677523304456)
@@ -337,7 +337,7 @@
] );
this.addPhetioLinkedElementsForModel( options.tandem, this.audioModel, [
- { property: this.audioModel.audioEnabledProperty, tandemName: 'audioEnabledProperty' },
+ { property: this.audioModel.audioEnabledProperty, tandemName: 'audioEnabledProperty', phetioDocumentation: 'hi documentation' },
{ property: this.audioModel.soundEnabledProperty, tandemName: 'soundEnabledProperty' },
{ property: this.audioModel.extraSoundEnabledProperty, tandemName: 'extraSoundEnabledProperty' },
{ property: this.audioModel.voicingEnabledProperty, tandemName: 'voicingEnabledProperty' },
@@ -443,7 +443,11 @@
for ( let j = 0; j < propertiesToLink.length; j++ ) {
const modelPropertyObject = propertiesToLink[ j ];
const tandemName = modelPropertyObject.tandemName || modelPropertyObject.property.tandem.name;
- this.addLinkedElement( modelPropertyObject.property, { tandem: tandem.createTandem( tandemName ) } );
+ const options = { tandem: tandem.createTandem( tandemName ) };
+ if ( modelPropertyObject.phetioDocumentation ) {
+ options.phetioDocumentation = modelPropertyObject.phetioDocumentation
+ }
+ this.addLinkedElement( modelPropertyObject.property, options );
}
}
| priority | add phetiodocumentation to preferencesmodel linked elements from so that something like preferencesmodel audiomodel voicepitchproperty has doc diff subject open based on stringified state not escaped index js preferences preferencesmodel ts idea additional info subsystem com intellij openapi diff impl patch charsetep utf diff git a js preferences preferencesmodel ts b js preferences preferencesmodel ts a js preferences preferencesmodel ts revision b js preferences preferencesmodel ts date this addphetiolinkedelementsformodel options tandem this audiomodel property this audiomodel audioenabledproperty tandemname audioenabledproperty property this audiomodel audioenabledproperty tandemname audioenabledproperty phetiodocumentation hi documentation property this audiomodel soundenabledproperty tandemname soundenabledproperty property this audiomodel extrasoundenabledproperty tandemname extrasoundenabledproperty property this audiomodel voicingenabledproperty tandemname voicingenabledproperty for let j j propertiestolink length j const modelpropertyobject propertiestolink const tandemname modelpropertyobject tandemname modelpropertyobject property tandem name this addlinkedelement modelpropertyobject property tandem tandem createtandem tandemname const options tandem tandem createtandem tandemname if modelpropertyobject phetiodocumentation options phetiodocumentation modelpropertyobject phetiodocumentation this addlinkedelement modelpropertyobject property options | 1 |
783,582 | 27,537,421,250 | IssuesEvent | 2023-03-07 05:17:05 | MariCornelio/markdown-links | https://api.github.com/repos/MariCornelio/markdown-links | closed | Verificación si path es directorio | API Priority: High | Crear un código para verificar si path ingresado es un directorio | 1.0 | Verificación si path es directorio - Crear un código para verificar si path ingresado es un directorio | priority | verificación si path es directorio crear un código para verificar si path ingresado es un directorio | 1 |
114,029 | 4,600,736,918 | IssuesEvent | 2016-09-22 07:02:21 | California-Planet-Search/radvel | https://api.github.com/repos/California-Planet-Search/radvel | opened | "per tc e w k" fitting basis not working | bug priority:high | Fitted values are crazy when working in this basis. Need to check the basis conversions. | 1.0 | "per tc e w k" fitting basis not working - Fitted values are crazy when working in this basis. Need to check the basis conversions. | priority | per tc e w k fitting basis not working fitted values are crazy when working in this basis need to check the basis conversions | 1 |
5,765 | 2,579,449,305 | IssuesEvent | 2015-02-13 10:20:07 | olga-jane/prizm | https://api.github.com/repos/olga-jane/prizm | closed | Release note problems (client 6 Feb 2015) | CLIENT Coding HIGH priority Release note | In the shipment permits I ran into the following:
1. The pipe list in the Pipe Number drop-down is unsorted. It would be more convenient to enable ascending sorting.
2. In the same list, when a pipe is added to the list, only the last added pipe is filtered out. Previously added pipes reappear as available for selection.
3. In the same list, adding the same pipe a second time raises an unhandled exception.
4. A railcar number column needs to be added to the Pipe List table.
5. In the Railcar Number field, after entering a railcar number and confirming with Enter, the previous value remains instead of the entered one.
6. After filling in the Pipe List and clicking Ship (without Save), the message "Dispatching a railcar without pipes is impossible" is shown.
7. After filling in the Pipe List and clicking Save and Ship, all fields are deactivated but the tab does not close. Closing it via the X button or Ctrl+F4 asks for a save confirmation.
8. Intermittent unhandled exceptions while the macro recorder is running.
I partly suspect the database: in the morning I set up a new project and started populating it with 2000 records, filling all text fields with Latin characters to minimize risks. Afterwards I will try rerunning the macro that generates shipment permits.
Addendum:
1. In the pipe list drop-down, typing the number of a pipe that does not exist in the database – an unhandled exception.
| 1.0 | Release note problems (client 6 Feb 2015) - In the shipment permits I ran into the following:
1. The pipe list in the Pipe Number drop-down is unsorted. It would be more convenient to enable ascending sorting.
2. In the same list, when a pipe is added to the list, only the last added pipe is filtered out. Previously added pipes reappear as available for selection.
3. In the same list, adding the same pipe a second time raises an unhandled exception.
4. A railcar number column needs to be added to the Pipe List table.
5. In the Railcar Number field, after entering a railcar number and confirming with Enter, the previous value remains instead of the entered one.
6. After filling in the Pipe List and clicking Ship (without Save), the message "Dispatching a railcar without pipes is impossible" is shown.
7. After filling in the Pipe List and clicking Save and Ship, all fields are deactivated but the tab does not close. Closing it via the X button or Ctrl+F4 asks for a save confirmation.
8. Intermittent unhandled exceptions while the macro recorder is running.
I partly suspect the database: in the morning I set up a new project and started populating it with 2000 records, filling all text fields with Latin characters to minimize risks. Afterwards I will try rerunning the macro that generates shipment permits.
Addendum:
1. In the pipe list drop-down, typing the number of a pipe that does not exist in the database – an unhandled exception.
| priority | release note problems client feb in the shipment permits i ran into the following the pipe list in the pipe number drop down is unsorted it would be more convenient to enable ascending sorting in the same list when a pipe is added to the list only the last added pipe is filtered out previously added pipes reappear as available for selection in the same list adding the same pipe a second time raises an unhandled exception a railcar number column needs to be added to the pipe list table in the railcar number field after entering a railcar number and confirming with enter the previous value remains instead of the entered one after filling in the pipe list and clicking ship without save the message dispatching a railcar without pipes is impossible is shown after filling in the pipe list and clicking save and ship all fields are deactivated but the tab does not close closing it via the x button or ctrl asks for a save confirmation intermittent unhandled exceptions while the macro recorder is running i partly suspect the database in the morning i set up a new project and started populating it with records filling all text fields with latin characters to minimize risks afterwards i will try rerunning the macro that generates shipment permits addendum in the pipe list drop down typing the number of a pipe that does not exist in the database – an unhandled exception | 1 |
325,641 | 9,933,981,503 | IssuesEvent | 2019-07-02 13:33:25 | telstra/open-kilda | https://api.github.com/repos/telstra/open-kilda | closed | NB API to get all flows that go through a switch | C/NB C/NBWORKER feature priority/2-high | Introduce a new API:
GET /v1/switches/switch_id/flows
params:
port - optional param to specify port for search | 1.0 | NB API to get all flows that go through a switch - Introduce a new API:
GET /v1/switches/switch_id/flows
params:
port - optional param to specify port for search | priority | nb api to get all flows that go through switch introduce new api get switches switch id flows params port optional param to specify port for search | 1 |
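A minimal client sketch for the endpoint proposed in the issue above (the base URL, function names, and response handling are assumptions for illustration, not part of the actual open-kilda northbound API):

```python
import json
import urllib.request

def flows_url(base_url, switch_id, port=None):
    """Build the request URL for the proposed endpoint:
    GET /v1/switches/{switch_id}/flows with an optional ?port= filter."""
    url = f"{base_url}/v1/switches/{switch_id}/flows"
    if port is not None:
        url += f"?port={port}"
    return url

def get_switch_flows(base_url, switch_id, port=None):
    """Fetch all flows that go through the given switch,
    optionally restricted to a single port."""
    with urllib.request.urlopen(flows_url(base_url, switch_id, port)) as resp:
        return json.load(resp)
```

Keeping the `port` filter as a query parameter (rather than a separate path) matches the issue's description of it as an optional narrowing of the same search.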
591,920 | 17,865,368,814 | IssuesEvent | 2021-09-06 08:48:28 | gap-packages/recog | https://api.github.com/repos/gap-packages/recog | opened | Check naming of classical groups | enhancement groups: classical priority: high | I want to compare the current code for naming classical groups, i.e. the function `RecogniseClassical`, to the corresponding Magma code. If there are discrepancies with the Magma code, then I'd simply use the Magma version.
`RecogniseClassical` is designed to be used with natural matrix groups. Until now we pass representatives of generators of a projective matrix group. Is it possible that we pass matrices which, taken projectively, generate a simple group, but which as a matrix group do not generate the full corresponding quasi-simple group, i.e. parts of the center could be missing? If that can happen, then I suggest the following workaround: after determining the forms that we leave invariant, create a new group whose generating set additionally contains generators of the center of the quasi-simple group corresponding to the form. That group is then guaranteed to contain the quasi-simple group.
Also, the function should probably be renamed to `NameClassical`. | 1.0 | Check naming of classical groups - I want to compare the current code for naming classical groups, i.e. the function `RecogniseClassical`, to the corresponding Magma code. If there are discrepancies with the Magma code, then I'd simply use the Magma version.
`RecogniseClassical` is designed to be used with natural matrix groups. Until now we pass representatives of generators of a projective matrix group. Is it possible that we pass matrices which, taken projectively, generate a simple group, but which as a matrix group do not generate the full corresponding quasi-simple group, i.e. parts of the center could be missing? If that can happen, then I suggest the following workaround: after determining the forms that we leave invariant, create a new group whose generating set additionally contains generators of the center of the quasi-simple group corresponding to the form. That group is then guaranteed to contain the quasi-simple group.
Also, the function should probably be renamed to `NameClassical`. | priority | check naming of classical groups i want to compare the current code for naming classical groups i e the function recogniseclassical to the corresponding magma code if there are discrepancies with the magma code then i d simply use the magma version recogniseclassical is designed to be used with natural matrix groups until now we pass representatives of generators of a projective matrix group is it possible that we pass matrices such that taken projectively they generate a simple group but as a matrix group they do not generate the full corresponding quasi simple group that is parts of the center could be missing if that can happen then i suggest the following workaround after determining the forms that we leave invariant create a new group to whose generating set we add generators of the center of the quasi simple group corresponding to the form that one is then guaranteed to contain the quasi simple group also the function should probably be renamed to nameclassical | 1 |
610,846 | 18,926,168,221 | IssuesEvent | 2021-11-17 09:46:42 | spacemeshos/go-spacemesh | https://api.github.com/repos/spacemeshos/go-spacemesh | closed | tortoise, beacon: joiner enters self-healing mode starting from epoch 3 and will never switch to verifying | bug high priority | ## Description
The beacon is not present in the database when the tortoise verifies synced blocks, so no block is marked good. And since block goodness is recursive, this node will never be able to switch back to the verifying tortoise.
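The cascade described above can be illustrated with a toy model (illustrative Python only, not the actual go-spacemesh tortoise code, which checks more conditions): a block counts as good only if the beacon for its epoch is known and its base block is itself good, so one missing beacon permanently poisons every descendant block.

```python
# Toy model of recursive block "goodness".
beacons = {1: "b1", 2: "b2", 4: "b4"}  # beacon for epoch 3 never arrived

blocks = {
    "genesis": {"epoch": 1, "base": None},
    "a":       {"epoch": 2, "base": "genesis"},
    "b":       {"epoch": 3, "base": "a"},  # bad: no beacon for epoch 3
    "c":       {"epoch": 4, "base": "b"},  # bad forever: its base is bad
}

def is_good(block_id):
    blk = blocks[block_id]
    if blk["epoch"] not in beacons:   # missing beacon => not good
        return False
    if blk["base"] is None:           # genesis anchors the recursion
        return True
    return is_good(blk["base"])       # goodness is inherited recursively

print([b for b in blocks if is_good(b)])  # → ['genesis', 'a']
```

Note that block "c" stays bad even though its own epoch's beacon is present, which is exactly why the node can never switch back to verifying mode.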
```
2085:2021-11-15T05:19:44.132+0200 INFO 1e0ba.trtl handling incoming layer {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "old_pbase": 16, "incoming_layer": 18, "name": "trtl"}
2087:2021-11-15T05:19:44.132+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "10c9b3e3ef", "layer_id": 18, "base_block_id": "060c767753"}
2088:2021-11-15T05:19:44.132+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "10c9b3e3ef", "layer_id": 18, "name": ""}
2090:2021-11-15T05:19:44.132+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "294d86f2fa", "layer_id": 18, "base_block_id": "40db609fcc"}
2091:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "294d86f2fa", "layer_id": 18, "name": ""}
2093:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "2f5d6b457e", "layer_id": 18, "base_block_id": "1945d76c09"}
2094:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "2f5d6b457e", "layer_id": 18, "name": ""}
2096:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "48794d8183", "layer_id": 18, "base_block_id": "15c05ca3b9"}
2097:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "48794d8183", "layer_id": 18, "name": ""}
2099:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "66b7a114fe", "layer_id": 18, "base_block_id": "4117d0f13c"}
2100:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "66b7a114fe", "layer_id": 18, "name": ""}
2102:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7436d38a13", "layer_id": 18, "base_block_id": "4117d0f13c"}
2103:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7436d38a13", "layer_id": 18, "name": ""}
2105:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "79b2370ddc", "layer_id": 18, "base_block_id": "0a8f39afd3"}
2106:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "79b2370ddc", "layer_id": 18, "name": ""}
2108:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7e409e1bca", "layer_id": 18, "base_block_id": "d9e8e4c5c5"}
2109:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7e409e1bca", "layer_id": 18, "name": ""}
2111:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7f64b16980", "layer_id": 18, "base_block_id": "060c767753"}
2112:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7f64b16980", "layer_id": 18, "name": ""}
2114:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "8c9d102846", "layer_id": 18, "base_block_id": "2be43ed2f5"}
2115:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "8c9d102846", "layer_id": 18, "name": ""}
2117:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b33dd1d8f8", "layer_id": 18, "base_block_id": "dc33d134e4"}
2118:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b33dd1d8f8", "layer_id": 18, "name": ""}
2120:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b4e9630dba", "layer_id": 18, "base_block_id": "5fd31e080d"}
2121:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b4e9630dba", "layer_id": 18, "name": ""}
2123:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b542c1db2e", "layer_id": 18, "base_block_id": "4117d0f13c"}
2124:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b542c1db2e", "layer_id": 18, "name": ""}
2126:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "bdedac250e", "layer_id": 18, "base_block_id": "2be43ed2f5"}
2127:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "bdedac250e", "layer_id": 18, "name": ""}
2129:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "c8be067b4f", "layer_id": 18, "base_block_id": "15c05ca3b9"}
2130:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "c8be067b4f", "layer_id": 18, "name": ""}
2132:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "cd821f2931", "layer_id": 18, "base_block_id": "2be43ed2f5"}
2133:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "cd821f2931", "layer_id": 18, "name": ""}
2135:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "ceb69665a1", "layer_id": 18, "base_block_id": "5fd31e080d"}
2136:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "ceb69665a1", "layer_id": 18, "name": ""}
2138:2021-11-15T05:19:44.134+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "e3fd4b4052", "layer_id": 18, "base_block_id": "5fd31e080d"}
2139:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "e3fd4b4052", "layer_id": 18, "name": ""}
2141:2021-11-15T05:19:44.134+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "ee8f21cc67", "layer_id": 18, "base_block_id": "060c767753"}
2142:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "ee8f21cc67", "layer_id": 18, "name": ""}
2144:2021-11-15T05:19:44.134+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "f2683bb9cc", "layer_id": 18, "base_block_id": "4dc6ad47bf"}
2145:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "f2683bb9cc", "layer_id": 18, "name": ""}
2147:2021-11-15T05:19:44.134+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "fc9776693e", "layer_id": 18, "base_block_id": "65638eb979"}
2148:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "fc9776693e", "layer_id": 18, "name": ""}
2149:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle finished marking good blocks {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "total_blocks": 21, "good_blocks": 0, "name": ""}
```
## Steps to reproduce
Sync the node using a client from develop.
## Actual Behavior
Synced node runs in self-healing mode.
## Expected Behavior
Synced node runs in verifying mode.
| 1.0 | tortoise, beacon: joiner enters self-healing mode starting from epoch 3 and will never switch to verifying - ## Description
The beacon is not present in the database when the tortoise verifies synced blocks, so no block is marked good. And since block goodness is recursive, this node will never be able to switch back to the verifying tortoise.
```
2085:2021-11-15T05:19:44.132+0200 INFO 1e0ba.trtl handling incoming layer {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "old_pbase": 16, "incoming_layer": 18, "name": "trtl"}
2087:2021-11-15T05:19:44.132+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "10c9b3e3ef", "layer_id": 18, "base_block_id": "060c767753"}
2088:2021-11-15T05:19:44.132+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "10c9b3e3ef", "layer_id": 18, "name": ""}
2090:2021-11-15T05:19:44.132+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "294d86f2fa", "layer_id": 18, "base_block_id": "40db609fcc"}
2091:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "294d86f2fa", "layer_id": 18, "name": ""}
2093:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "2f5d6b457e", "layer_id": 18, "base_block_id": "1945d76c09"}
2094:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "2f5d6b457e", "layer_id": 18, "name": ""}
2096:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "48794d8183", "layer_id": 18, "base_block_id": "15c05ca3b9"}
2097:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "48794d8183", "layer_id": 18, "name": ""}
2099:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "66b7a114fe", "layer_id": 18, "base_block_id": "4117d0f13c"}
2100:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "66b7a114fe", "layer_id": 18, "name": ""}
2102:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7436d38a13", "layer_id": 18, "base_block_id": "4117d0f13c"}
2103:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7436d38a13", "layer_id": 18, "name": ""}
2105:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "79b2370ddc", "layer_id": 18, "base_block_id": "0a8f39afd3"}
2106:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "79b2370ddc", "layer_id": 18, "name": ""}
2108:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7e409e1bca", "layer_id": 18, "base_block_id": "d9e8e4c5c5"}
2109:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7e409e1bca", "layer_id": 18, "name": ""}
2111:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7f64b16980", "layer_id": 18, "base_block_id": "060c767753"}
2112:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "7f64b16980", "layer_id": 18, "name": ""}
2114:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "8c9d102846", "layer_id": 18, "base_block_id": "2be43ed2f5"}
2115:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "8c9d102846", "layer_id": 18, "name": ""}
2117:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b33dd1d8f8", "layer_id": 18, "base_block_id": "dc33d134e4"}
2118:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b33dd1d8f8", "layer_id": 18, "name": ""}
2120:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b4e9630dba", "layer_id": 18, "base_block_id": "5fd31e080d"}
2121:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b4e9630dba", "layer_id": 18, "name": ""}
2123:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b542c1db2e", "layer_id": 18, "base_block_id": "4117d0f13c"}
2124:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "b542c1db2e", "layer_id": 18, "name": ""}
2126:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "bdedac250e", "layer_id": 18, "base_block_id": "2be43ed2f5"}
2127:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "bdedac250e", "layer_id": 18, "name": ""}
2129:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "c8be067b4f", "layer_id": 18, "base_block_id": "15c05ca3b9"}
2130:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "c8be067b4f", "layer_id": 18, "name": ""}
2132:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "cd821f2931", "layer_id": 18, "base_block_id": "2be43ed2f5"}
2133:2021-11-15T05:19:44.133+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "cd821f2931", "layer_id": 18, "name": ""}
2135:2021-11-15T05:19:44.133+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "ceb69665a1", "layer_id": 18, "base_block_id": "5fd31e080d"}
2136:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "ceb69665a1", "layer_id": 18, "name": ""}
2138:2021-11-15T05:19:44.134+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "e3fd4b4052", "layer_id": 18, "base_block_id": "5fd31e080d"}
2139:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "e3fd4b4052", "layer_id": 18, "name": ""}
2141:2021-11-15T05:19:44.134+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "ee8f21cc67", "layer_id": 18, "base_block_id": "060c767753"}
2142:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "ee8f21cc67", "layer_id": 18, "name": ""}
2144:2021-11-15T05:19:44.134+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "f2683bb9cc", "layer_id": 18, "base_block_id": "4dc6ad47bf"}
2145:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "f2683bb9cc", "layer_id": 18, "name": ""}
2147:2021-11-15T05:19:44.134+0200 ERROR 1e0ba.trtl .turtle failed to get beacon for epoch%!(EXTRA types.EpochID=3) {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "fc9776693e", "layer_id": 18, "base_block_id": "65638eb979"}
2148:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle not marking block good {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "block_id": "fc9776693e", "layer_id": 18, "name": ""}
2149:2021-11-15T05:19:44.134+0200 INFO 1e0ba.trtl .turtle finished marking good blocks {"node_id": "1e0ba226c5bf7917a1551ce0addd03fd7aa72f66fd5c411f6b322514e4d2ceb1", "module": "trtl", "sessionId": "31534b53-92c8-4ff1-b1f5-8e600308e05f", "total_blocks": 21, "good_blocks": 0, "name": ""}
```
## Steps to reproduce
Sync the node using a client from develop.
## Actual Behavior
Synced node runs in self-healing mode.
## Expected Behavior
Synced node runs in verifying mode.
| priority | tortoise beacon joiner enters self healing mode starting from epoch and will never switch to verifying description beacon is not present in the database when tortoise verifies synced blocks so every block is not good and as block goodness is recursive this node will never be able to switch back to verifying tortoise info trtl handling incoming layer node id module trtl sessionid old pbase incoming layer name trtl error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name 
error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block 
id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name error trtl turtle failed to get beacon for epoch extra types epochid node id module trtl sessionid block id layer id base block id info trtl turtle not marking block good node id module trtl sessionid block id layer id name info trtl turtle finished marking good blocks node id module trtl sessionid total blocks good blocks name steps to reproduce sync the node using a client from develop actual behavior synced node runs in self healing mode expected behavior synced node runs in verifying mode | 1 |
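The record above hinges on block "goodness" being recursive: a block is good only if its epoch beacon is present and its base block is itself good, so one missing beacon poisons every descendant and the node can never return to verifying mode. A simplified sketch of that cascade (a hypothetical model, not the actual spacemesh implementation):

```python
# Simplified model of recursive block "goodness": a block is good only if its
# epoch beacon is known AND its base block is good. One missing beacon
# therefore poisons every descendant block.

def is_good(block, beacons, blocks, memo=None):
    """Return True if `block` is good under the recursive rule."""
    if memo is None:
        memo = {}
    bid = block["id"]
    if bid in memo:
        return memo[bid]
    good = block["epoch"] in beacons
    base = block.get("base")
    if good and base is not None:
        good = is_good(blocks[base], beacons, blocks, memo)
    memo[bid] = good
    return good

# Epoch 3's beacon is missing from the database.
beacons = {1, 2, 4}
blocks = {
    "g": {"id": "g", "epoch": 1, "base": None},  # early block, beacon known
    "a": {"id": "a", "epoch": 3, "base": "g"},   # beacon missing -> not good
    "b": {"id": "b", "epoch": 4, "base": "a"},   # beacon known, base is bad
}

assert is_good(blocks["g"], beacons, blocks)
assert not is_good(blocks["a"], beacons, blocks)   # missing beacon
assert not is_good(blocks["b"], beacons, blocks)   # poisoned by its base
```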
343,013 | 10,324,392,877 | IssuesEvent | 2019-09-01 08:44:21 | OpenSRP/opensrp-client-chw | https://api.github.com/repos/OpenSRP/opensrp-client-chw | closed | Upcoming Services page does not update immediately after child home visit is submitted | bug high priority | As soon as the child home visit form is completed and submitted, the Upcoming Services page should update to the next set of upcoming services for that child. This isn't happening currently. | 1.0 | Upcoming Services page does not update immediately after child home visit is submitted - As soon as the child home visit form is completed and submitted, the Upcoming Services page should update to the next set of upcoming services for that child. This isn't happening currently. | priority | upcoming services page does not update immediately after child home visit is submitted as soon as the child home visit form is completed and submitted the upcoming services page should update to the next set of upcoming services for that child this isn t happening currently | 1 |
328,643 | 9,997,891,602 | IssuesEvent | 2019-07-12 06:31:35 | wso2-cellery/sdk | https://api.github.com/repos/wso2-cellery/sdk | closed | Cellery test with already running instance shows wrong message | Priority/High Severity/Major Type/Bug | I have already pet-be cell running, and I get below log, which says `pet-be` instance should be created.
```
cellery test wso2cellery/pet-be-cell:latest -n pet-be
✔ Extracting Cell Image wso2cellery/pet-be-cell:latest
✔ Reading Cell Image wso2cellery/pet-be-cell:latest
✔ Validating dependencies
Instances to be Used:
INSTANCE NAME CELL IMAGE USED INSTANCE SHARED
--------------- -------------------------------- --------------- --------
pet-be wso2cellery/pet-be-cell:latest To be Created -
Dependency Tree to be Used:
No Dependencies
? Do you wish to continue with starting above Cell instances (Y/n)?
``` | 1.0 | Cellery test with already running instance shows wrong message - I have already pet-be cell running, and I get below log, which says `pet-be` instance should be created.
```
cellery test wso2cellery/pet-be-cell:latest -n pet-be
✔ Extracting Cell Image wso2cellery/pet-be-cell:latest
✔ Reading Cell Image wso2cellery/pet-be-cell:latest
✔ Validating dependencies
Instances to be Used:
INSTANCE NAME CELL IMAGE USED INSTANCE SHARED
--------------- -------------------------------- --------------- --------
pet-be wso2cellery/pet-be-cell:latest To be Created -
Dependency Tree to be Used:
No Dependencies
? Do you wish to continue with starting above Cell instances (Y/n)?
``` | priority | cellery test with already running instance shows wrong message i have already pet be cell running and i get below log which says pet be instance should be created cellery test pet be cell latest n pet be ✔ extracting cell image pet be cell latest ✔ reading cell image pet be cell latest ✔ validating dependencies instances to be used instance name cell image used instance shared pet be pet be cell latest to be created dependency tree to be used no dependencies do you wish to continue with starting above cell instances y n | 1 |
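The Cellery record above describes a status-resolution bug: an instance that is already running is still reported as "To be Created". A minimal sketch of the expected check (the status strings here are illustrative, not necessarily Cellery's exact wording):

```python
# Before printing the instances table, the CLI should consult the set of
# already-running instances instead of unconditionally reporting
# "To be Created". Status strings are illustrative.

def instance_status(name, running_instances):
    return "Available in Runtime" if name in running_instances else "To be Created"

running = {"pet-be"}
assert instance_status("pet-be", running) == "Available in Runtime"
assert instance_status("pet-fe", running) == "To be Created"
```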
758,778 | 26,568,726,104 | IssuesEvent | 2023-01-20 23:37:11 | codepandoradev/nft-marketplace-api | https://api.github.com/repos/codepandoradev/nft-marketplace-api | closed | Implement user favorites | task priority:hight not_milestone |
## Description
Add an array that will store information about what the user has added to "Favorites" | 1.0 | Implement user favorites -
## Description
Add an array that will store information about what the user has added to "Favorites" | priority | implement user favorites description add an array that will store information about what the user has added to favorites | 1 |
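The record above asks for an array on the user that stores what was added to "Favorites". A minimal sketch of such a data model (field names are assumptions, not the real marketplace schema):

```python
# Hypothetical user model keeping a "Favorites" array of item ids, with
# idempotent add/remove helpers so duplicates cannot accumulate.

class User:
    def __init__(self, name):
        self.name = name
        self.favorites = []  # ids the user added to "Favorites"

    def add_favorite(self, item_id):
        if item_id not in self.favorites:
            self.favorites.append(item_id)

    def remove_favorite(self, item_id):
        if item_id in self.favorites:
            self.favorites.remove(item_id)

u = User("alice")
u.add_favorite("nft-1")
u.add_favorite("nft-1")  # duplicate is ignored
assert u.favorites == ["nft-1"]
u.remove_favorite("nft-1")
assert u.favorites == []
```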
209,290 | 7,167,254,077 | IssuesEvent | 2018-01-29 19:54:57 | CityOfPhiladelphia/parks-rec-finder | https://api.github.com/repos/CityOfPhiladelphia/parks-rec-finder | opened | Refine filters to match spec | bug priority - high | ### Expected Behavior
[7.1.2.1. - 7.2.1.4.](https://docs.google.com/document/d/1duzm6Nj914sYiUyMSUSoKMWFETPy3C5XSajTEyI5pAs/edit?usp=sharing)
- View Fee (exclusive)
- Select “Fee” or “Free”
- View Age (inclusive)
- Select Age range
- Tot (2-5 or younger)
- Youth (6-12)
- Teen (13-19)
- Adult (20-55)
- Senior (56+)
- View Gender (inclusive)
- Select “Male” “Female” or “All”
- View day of week (inclusive)
- Select day of week
- Monday
- Tuesday
- Wednesday
- Thursday
- Friday
- Saturday
- Sunday
| 1.0 | Refine filters to match spec - ### Expected Behavior
[7.1.2.1. - 7.2.1.4.](https://docs.google.com/document/d/1duzm6Nj914sYiUyMSUSoKMWFETPy3C5XSajTEyI5pAs/edit?usp=sharing)
- View Fee (exclusive)
- Select “Fee” or “Free”
- View Age (inclusive)
- Select Age range
- Tot (2-5 or younger)
- Youth (6-12)
- Teen (13-19)
- Adult (20-55)
- Senior (56+)
- View Gender (inclusive)
- Select “Male” “Female” or “All”
- View day of week (inclusive)
- Select day of week
- Monday
- Tuesday
- Wednesday
- Thursday
- Friday
- Saturday
- Sunday
| priority | refine filters to match spec expected behavior view fee exclusive select “fee” or “free” view age inclusive select age range tot or younger youth teen adult senior view gender inclusive select “male” “female” or “all” view day of week inclusive select day of week monday tuesday wednesday thursday friday saturday sunday | 1 |
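The spec above distinguishes an exclusive filter (Fee: pick one value) from inclusive filters (Age, Gender, Day: an item matches if it overlaps any selected value). A sketch of those semantics, with assumed field names:

```python
# "Fee" is exclusive (a single choice must match exactly), while age, gender
# and day-of-week are inclusive (an item matches if it overlaps ANY selected
# value). Field names are illustrative.

def matches(program, fee=None, ages=(), genders=(), days=()):
    if fee is not None and program["fee"] != fee:
        return False
    if ages and not set(ages) & set(program["ages"]):
        return False
    if genders and not set(genders) & set(program["genders"]):
        return False
    if days and not set(days) & set(program["days"]):
        return False
    return True

program = {"fee": "Free", "ages": ["Youth", "Teen"],
           "genders": ["All"], "days": ["Monday", "Wednesday"]}

assert matches(program, fee="Free")
assert not matches(program, fee="Fee")              # exclusive
assert matches(program, ages=["Teen", "Adult"])     # inclusive: Teen overlaps
assert matches(program, days=["Wednesday", "Sunday"])
assert not matches(program, days=["Sunday"])
```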
132,733 | 5,192,050,370 | IssuesEvent | 2017-01-22 03:27:20 | bethlakshmi/GBE2 | https://api.github.com/repos/bethlakshmi/GBE2 | closed | Export "All" calendar as Guidebook .csv | 5 point Estimated High Priority Merged | Guidebook has a format for importing schedule data.
Here's the guide, change *.txt to *.csv, GitHub doesn't allow CSVs
[Guidebook_Schedule_Template.txt](https://github.com/bethlakshmi/GBE2/files/85956/Guidebook_Schedule_Template.txt)
We should be able to offer a scheduler thing that does "export calendar for conference_slug" and gives a csv that guidebook will take.
| 1.0 | Export "All" calendar as Guidebook .csv - Guidebook has a format for importing schedule data.
Here's the guide, change *.txt to *.csv, GitHub doesn't allow CSVs
[Guidebook_Schedule_Template.txt](https://github.com/bethlakshmi/GBE2/files/85956/Guidebook_Schedule_Template.txt)
We should be able to offer a scheduler thing that does "export calendar for conference_slug" and gives a csv that guidebook will take.
| priority | export all calendar as guidebook csv guidebook has a format for importing schedule data here s the guide change txt to csv git hub doesn t allow csvs we should be able to offer a scheduler thing that does export calendar for conference slug and gives a csv that guidebook will take | 1 |
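The Guidebook export request above boils down to writing schedule rows into a fixed-column CSV. A rough sketch of such an exporter; the column names follow the general shape of Guidebook's schedule template but should be checked against the actual template file before use:

```python
# Export conference events as a Guidebook-style schedule CSV. Column names
# are assumed from the template's general shape, not verified here.
import csv
import io

GUIDEBOOK_COLUMNS = ["Session Title", "Date", "Time Start", "Time End",
                     "Room/Location", "Schedule Track (Optional)",
                     "Description (Optional)"]

def export_guidebook_csv(events, out):
    writer = csv.DictWriter(out, fieldnames=GUIDEBOOK_COLUMNS)
    writer.writeheader()
    for ev in events:
        writer.writerow(ev)

buf = io.StringIO()
export_guidebook_csv([{
    "Session Title": "Opening Act",
    "Date": "2017-02-10",
    "Time Start": "19:00",
    "Time End": "20:00",
    "Room/Location": "Main Stage",
    "Schedule Track (Optional)": "conference_slug",
    "Description (Optional)": "Kick-off show",
}], buf)

rows = buf.getvalue().splitlines()
assert rows[0].startswith("Session Title,Date,Time Start")
assert "Opening Act" in rows[1]
```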
438,041 | 12,610,178,358 | IssuesEvent | 2020-06-12 04:05:41 | prysmaticlabs/prysm | https://api.github.com/repos/prysmaticlabs/prysm | closed | Rate limiting on infura eth1 node, no backoff of beacon-chain | Bug Priority: High | # 🐞 Bug Report
### Description
After getting rate limited on the infura eth1 node beacon-chain keeps trying to poll the endpoint every second. This causes the rate limiting to not resolve.
### Has this worked before in a previous version?
Have not seen any rate limiting. It is to check if polling is too fast either way.
## 🔬 Minimal Reproduction
Use infura endpoint.
## 🔥 Error
beacon chain error message, repeating every second
<pre>
time="2020-06-10 16:01:00" level=error msg="could not get block timestamp: could not query block with height 2848026: 429 Too Many Requests {"jsonrpc":"2.0","id":61386,"error":{"code":-32005,"message":"daily request count exceeded, request rate limited","data":{"rate":{"allowed_rps":1,"backoff_seconds":30,"current_rps":1.1},"see":"https://infura.io/dashboard"}}}" prefix=powchain
</pre>
## 🌍 Your Environment
**Operating System:**
Ubuntu 20.04
Docker 19.03.8
**What version of Prysm are you running? (Which release)**
alpha.10
gcr.io/prysmaticlabs/prysm/beacon-chain:HEAD-1f20cb
| 1.0 | Rate limiting on infura eth1 node, no backoff of beacon-chain - # 🐞 Bug Report
### Description
After getting rate limited on the infura eth1 node beacon-chain keeps trying to poll the endpoint every second. This causes the rate limiting to not resolve.
### Has this worked before in a previous version?
Have not seen any rate limiting. It is to check if polling is too fast either way.
## 🔬 Minimal Reproduction
Use infura endpoint.
## 🔥 Error
beacon chain error message, repeating every second
<pre>
time="2020-06-10 16:01:00" level=error msg="could not get block timestamp: could not query block with height 2848026: 429 Too Many Requests {"jsonrpc":"2.0","id":61386,"error":{"code":-32005,"message":"daily request count exceeded, request rate limited","data":{"rate":{"allowed_rps":1,"backoff_seconds":30,"current_rps":1.1},"see":"https://infura.io/dashboard"}}}" prefix=powchain
</pre>
## 🌍 Your Environment
**Operating System:**
Ubuntu 20.04
Docker 19.03.8
**What version of Prysm are you running? (Which release)**
alpha.10
gcr.io/prysmaticlabs/prysm/beacon-chain:HEAD-1f20cb
| priority | rate limiting on infura node no backoff of beacon chain 🐞 bug report description after getting rate limited on the infura node beacon chain keeps trying to poll the endpoint every second this causes the rate limiting to not resolve has this worked before in a previous version have not seen any rate limiting it is to check if polling is too fast either way 🔬 minimal reproduction use infura endpoint 🔥 error beacon chain error message repeating every second time level error msg could not get block timestamp could not query block with height too many requests jsonrpc id error code message daily request count exceeded request rate limited data rate allowed rps backoff seconds current rps see prefix powchain 🌍 your environment operating system ubuntu docker what version of prysm are you running which release alpha gcr io prysmaticlabs prysm beacon chain head | 1 |
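The report above notes the node keeps re-polling every second after a 429, even though the error payload itself carries a `backoff_seconds: 30` hint. A sketch of the missing behavior, capped exponential backoff seeded by the server hint (pure arithmetic, not Prysm's actual retry code):

```python
# Instead of re-polling every second after a 429, back off exponentially,
# optionally seeded by the server's "backoff_seconds" hint, up to a cap.

def backoff_delays(base=1.0, cap=60.0, server_hint=None):
    """Yield successive retry delays in seconds."""
    delay = server_hint if server_hint is not None else base
    while True:
        yield min(delay, cap)
        delay *= 2

gen = backoff_delays(server_hint=30)
delays = [next(gen) for _ in range(3)]
assert delays == [30, 60, 60]   # honors the hint, then caps at 60s

gen = backoff_delays()
assert [next(gen) for _ in range(4)] == [1.0, 2.0, 4.0, 8.0]
```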
764,922 | 26,823,543,183 | IssuesEvent | 2023-02-02 11:11:41 | fkie-cad/dewolf | https://api.github.com/repos/fkie-cad/dewolf | closed | Binaryninja UI decompilation fails with "Function object does not have attribute startswith" | bug priority-high | ### What happened?
When trying to decompile a function via the bn interface, the following error is raised:
> "Function object does not have attribute startswith"
### How to reproduce?
It seems the function object passed to `create_task` is parsed as a string.
### Affected Binary Ninja Version(s)
3.3 | 1.0 | Binaryninja UI decompilation fails with "Function object does not have attribute startswith" - ### What happened?
When trying to decompile a function via the bn interface, the following error is raised:
> "Function object does not have attribute startswith"
### How to reproduce?
It seems the function object passed to `create_task` is parsed as a string.
### Affected Binary Ninja Version(s)
3.3 | priority | binaryninja ui decompilation fails with function object does not have attribute startswith what happened when trying to decompile a function via the bn interface the following error is raised function object does not have attribute startswith how to reproduce it seems the function object passed to create task is parsed as a string affected binary ninja version s | 1 |
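The failure above is a classic type mismatch: a `Function` object reaches code that assumes `str.startswith`. A minimal reproduction sketch plus the obvious guard; `Function` and `create_task` here are stand-ins, not the real dewolf or Binary Ninja API:

```python
# Reproduce the AttributeError and show a defensive coercion: accept either
# a function name string or a Function-like object carrying a .name.

class Function:            # stand-in for a Binary Ninja Function object
    def __init__(self, name):
        self.name = name

def create_task(function_name):
    # Defensive coercion before any str-only methods are used.
    if not isinstance(function_name, str):
        function_name = function_name.name
    return function_name.startswith("sub_")

assert create_task("sub_401000") is True
assert create_task(Function("main")) is False      # no AttributeError anymore

try:
    Function("main").startswith                    # Function lacks startswith
except AttributeError as exc:
    assert "startswith" in str(exc)
```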
802,101 | 28,633,977,918 | IssuesEvent | 2023-04-25 00:19:59 | centerforaisafety/cerberus-cluster | https://api.github.com/repos/centerforaisafety/cerberus-cluster | opened | Cross talk between nfs | bug High priority | I think the /NFS/cluster is talking to the main cluster but really slowly. This is what's related #58 too.
Still need to investigate why spack is even called on startup though. | 1.0 | Cross talk between nfs - I think the /NFS/cluster is talking to the main cluster but really slowly. This is what's related #58 too.
Still need to investigate why spack is even called on startup though. | priority | cross talk between nfs i think the nfs cluster is talking to the main cluster but really slowly this is what s related too still need to investigate why spack is even called on startup though | 1 |
391,315 | 11,572,200,686 | IssuesEvent | 2020-02-20 23:21:46 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Incorrect result of argmax on CUDA with large non-contig tensors and on non-contig dimensions | high priority module: cuda module: operators topic: TensorIterator triaged | ```python
import torch
a = torch.load('bug.pt', 'cuda') # giant uint8 tensor with binary 0/1 values
C = 11359
b = a[:, 0, C]
print('shape', b.shape)
print('argmax', b.argmax())
print('max', b.max())
K = int(b.argmax())
print(b[K]) # should be 1, but is 0
print(a[K, 0, C]) # should be 1, but is 0
print(b.nonzero()) # different result compared to argmax
print('bad argmax', a.argmax(dim = 0)[0][C])
```
```
>>> torch.__version__
'1.4.0'
>>> torch.version.git_version
'143868c3df4eb2acdcb166b340a1063abf61339c'
```
I tried to minimize the size of `bug.pt`, but my first attempts led to bug disappearing, so I'm uploading it to my OneDrive as is (it's quite heavy - 3Gb): https://1drv.ms/u/s!Apx8USiTtrYmqfhMddRNKxibLatubA?e=wyihRU
(it would be nice for such situations to have a safe unpickler that can't execute arbitrary code or have it support loading/saving tensors from/to hdf5 as well. doesn't pytorch have some rudimentary unpickler for TorchCpp use?)
cc @ezyang @gchanan @zou3519 @ngimel | 1.0 | Incorrect result of argmax on CUDA with large non-contig tensors and on non-contig dimensions - ```python
import torch
a = torch.load('bug.pt', 'cuda') # giant uint8 tensor with binary 0/1 values
C = 11359
b = a[:, 0, C]
print('shape', b.shape)
print('argmax', b.argmax())
print('max', b.max())
K = int(b.argmax())
print(b[K]) # should be 1, but is 0
print(a[K, 0, C]) # should be 1, but is 0
print(b.nonzero()) # different result compared to argmax
print('bad argmax', a.argmax(dim = 0)[0][C])
```
```
>>> torch.__version__
'1.4.0'
>>> torch.version.git_version
'143868c3df4eb2acdcb166b340a1063abf61339c'
```
I tried to minimize the size of `bug.pt`, but my first attempts led to bug disappearing, so I'm uploading it to my OneDrive as is (it's quite heavy - 3Gb): https://1drv.ms/u/s!Apx8USiTtrYmqfhMddRNKxibLatubA?e=wyihRU
(it would be nice for such situations to have a safe unpickler that can't execute arbitrary code or have it support loading/saving tensors from/to hdf5 as well. doesn't pytorch have some rudimentary unpickler for TorchCpp use?)
cc @ezyang @gchanan @zou3519 @ngimel | priority | incorrect result of argmax on cuda with large non contig tensors and on non contig dimensions python import torch a torch load bug pt cuda giant tensor with binary values c b a print shape b shape print argmax b argmax print max b max k int b argmax print b should be but is print a should be but is print b nonzero different result compared to argmax print bad argmax a argmax dim torch version torch version git version i tried to minimize the size of bug pt but my first attempts led to bug disappearing so i m uploading it to my onedrive as is it s quite heavy it would be nice for such situations to have a safe unpickler that can t execute arbitrary code or have it support loading saving tensors from to as well doesn t pytorch have some rudimentary unpickler for torchcpp use cc ezyang gchanan ngimel | 1 |
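The report above reduces to a violated invariant: `b[b.argmax()]` should equal `b.max()`. A small pure-Python checker expressing that invariant (the torch-specific mitigation sometimes suggested for non-contiguous inputs, calling `.contiguous()` before `argmax`, is not shown or verified here):

```python
# Check that a claimed argmax index and max value agree with the data itself,
# mirroring the manual verification done in the report.

def argmax_consistent(values, argmax_index, max_value):
    """True if the claimed argmax/max agree with the data."""
    return values[argmax_index] == max_value == max(values)

b = [0, 0, 1, 0]
k = max(range(len(b)), key=b.__getitem__)   # reference argmax
assert k == 2
assert argmax_consistent(b, k, 1)
assert not argmax_consistent(b, 0, 1)       # a wrong argmax is caught
```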
386,965 | 11,453,874,889 | IssuesEvent | 2020-02-06 16:06:38 | jrtechs/github-graphs | https://api.github.com/repos/jrtechs/github-graphs | opened | github Authentication Update | high priority | Github has just deprecated the way in which this project is authenticating with the GitHub API.
https://developer.github.com/changes/2019-11-05-deprecated-passwords-and-authorizations-api/#authenticating-using-query-parameters
Moving forward we need to switch to an OAuth system. | 1.0 | github Authentication Update - Github has just deprecated the way in which this project is authenticating with the GitHub API.
https://developer.github.com/changes/2019-11-05-deprecated-passwords-and-authorizations-api/#authenticating-using-query-parameters
Moving forward we need to switch to an OAuth system. | priority | github authentication update github has just deprecated the way in which this project is authenticating with the github api moving forward we need to switch to an oauth system | 1 |
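GitHub's deprecation above means credentials must move out of the query string and into the `Authorization` header. A sketch of the required change (the token value is a placeholder):

```python
# Authenticate via the Authorization header instead of ?access_token=...
# query parameters, per GitHub's deprecation of query-parameter auth.

def github_request_headers(token):
    return {
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3+json",
    }

def build_url(user):
    # No ?access_token=... query parameter anymore.
    return f"https://api.github.com/users/{user}/repos"

headers = github_request_headers("<OAUTH_TOKEN>")
assert headers["Authorization"] == "token <OAUTH_TOKEN>"
assert "access_token" not in build_url("jrtechs")
```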
186,076 | 6,733,321,350 | IssuesEvent | 2017-10-18 14:30:30 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | opened | [Dispense] Improve verification code validation | epic/medication_dispense kind/task priority/high project/reimbursement status/todo | Improve verification code validation according to [spec](http://docs.uaehealthapi.apiary.io/#reference/public.-reimbursement/medication-dispense/create-medication-dispense) and [logic](https://edenlab.atlassian.net/wiki/spaces/EH/pages/3381244/Create+Medication+Dispense):
- [ ] Set code optional in request
- [ ] Update verification logic | 1.0 | [Dispense] Improve verification code validation - Improve verification code validation according to [spec](http://docs.uaehealthapi.apiary.io/#reference/public.-reimbursement/medication-dispense/create-medication-dispense) and [logic](https://edenlab.atlassian.net/wiki/spaces/EH/pages/3381244/Create+Medication+Dispense):
- [ ] Set code optional in request
- [ ] Update verification logic | priority | improve verification code validation improve verification code validation according to and set code optional in request update verification logic | 1 |
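The task above makes the verification code optional in the request and updates the check accordingly. A hedged sketch of that logic; field names are illustrative, not the real eHealth schema:

```python
# Verify the dispense code only when the prescription actually carries a
# verification code; otherwise the request may omit it entirely.

def validate_dispense(request, prescription):
    required = prescription.get("verification_code") is not None
    code = request.get("code")
    if not required:
        return True                      # nothing to check
    if code is None:
        return False                     # code required but missing
    return code == prescription["verification_code"]

rx_with_code = {"verification_code": "1234"}
rx_without = {}

assert validate_dispense({"code": "1234"}, rx_with_code)
assert not validate_dispense({}, rx_with_code)
assert validate_dispense({}, rx_without)          # optional when not required
assert not validate_dispense({"code": "9999"}, rx_with_code)
```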
226,800 | 7,523,118,630 | IssuesEvent | 2018-04-12 23:07:54 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Tensor Factory Improvements List | enhancement high priority | - [x] Include equivalents of `np.empty` / `np.full` (PR: https://github.com/pytorch/pytorch/pull/5668)
- [x] Include size-based `new` factory methods (with the eventual goal of deprecating `Variable.new`). These are: `Variable.new_empty`, `Variable.new_ones`, `Variable.new_zeros`, `Variable.new_full`
(PR: https://github.com/pytorch/pytorch/pull/5668)
- [x] `torch.tensor` and `Variable.new_tensor` should have first-class support for sparse. (PR: https://github.com/pytorch/pytorch/pull/5745)
- [x] Verify device / dtype are consistent (Issue: https://github.com/pytorch/pytorch/issues/5461).
- [x] `new_tensor` / `torch.tensor` should copy numpy inputs (PR: https://github.com/pytorch/pytorch/pull/5713)
- [x] `torch.tensor` should do float/int type inference. (PR: https://github.com/pytorch/pytorch/pull/5997) | 1.0 | Tensor Factory Improvements List - - [x] Include equivalents of `np.empty` / `np.full` (PR: https://github.com/pytorch/pytorch/pull/5668)
- [x] Include size-based `new` factory methods (with the eventual goal of deprecating `Variable.new`). These are: `Variable.new_empty`, `Variable.new_ones`, `Variable.new_zeros`, `Variable.new_full`
(PR: https://github.com/pytorch/pytorch/pull/5668)
- [x] `torch.tensor` and `Variable.new_tensor` should have first-class support for sparse. (PR: https://github.com/pytorch/pytorch/pull/5745)
- [x] Verify device / dtype are consistent (Issue: https://github.com/pytorch/pytorch/issues/5461).
- [x] `new_tensor` / `torch.tensor` should copy numpy inputs (PR: https://github.com/pytorch/pytorch/pull/5713)
- [x] `torch.tensor` should do float/int type inference. (PR: https://github.com/pytorch/pytorch/pull/5997) | priority | tensor factory improvements list include equivalents of np empty np full pr include size based new factory methods with the eventual goal of deprecating variable new these are variable new empty variable new ones variable new zeros variable new full pr torch tensor and variable new tensor should have first class support for sparse pr verify device dtype are consistent issue new tensor torch tensor should copy numpy inputs pr torch tensor should do float int type inference pr | 1 |
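The last checklist item concerns float/int type inference in `torch.tensor`. A pure-Python emulation of that rule (an integer dtype for all-int data, a floating dtype once any element is a float), just to illustrate the behavior without needing torch installed:

```python
# Mimic torch.tensor's float/int dtype inference over nested Python lists.

def infer_dtype(data):
    def flatten(x):
        if isinstance(x, (list, tuple)):
            for item in x:
                yield from flatten(item)
        else:
            yield x
    values = list(flatten(data))
    return "float" if any(isinstance(v, float) for v in values) else "int"

assert infer_dtype([1, 2, 3]) == "int"
assert infer_dtype([[1, 2], [3, 4.0]]) == "float"  # one float promotes all
```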
773,428 | 27,157,241,846 | IssuesEvent | 2023-02-17 08:57:48 | bryntum/support | https://api.github.com/repos/bryntum/support | closed | `ZoomIn`/`ZoomOut` goes to a different location | bug resolved high-priority forum OEM | [Forum post](https://forum.bryntum.com/viewtopic.php?f=52&t=23617&p=116941#p116941)
Hi,
After updating to Bryntum Gantt 5.2.8, the zoomIn/zoomOut feature completely moves the scroll to another location. This occurs when the weekends are hidden.
Steps to reproduce:
Go to advanced example ( https://bryntum.com/products/gantt/examples/advanced/ )
In console, paste the following code to hide Weekends:
```
const oneDayInSeconds = 1000 * 60 * 60 * (24) * 1;
gantt.timeAxis.filter({
id: 'hideWeekendFilter',
filterBy: tick => {
let weekday = true;
if ((tick.duration >= oneDayInSeconds && tick.duration <= oneDayInSeconds) && (tick.startDate.getDay() == 6 || tick.startDate.getDay() == 0)) {
weekday = false;
}
return weekday;
}
});
```
Try zooming in 2-3 times using either mouse or buttons.
The zoom would go to a different location
Can we maintain the centerDate/mouseFocus in scroll like it was in the previous versions?
Thanks,
Rayudu | 1.0 | `ZoomIn`/`ZoomOut` goes to a different location - [Forum post](https://forum.bryntum.com/viewtopic.php?f=52&t=23617&p=116941#p116941)
Hi,
After updating to Bryntum Gantt 5.2.8, the zoomIn/zoomOut feature completely moves the scroll to another location. This occurs when the weekends are hidden.
Steps to reproduce:
Go to advanced example ( https://bryntum.com/products/gantt/examples/advanced/ )
In console, paste the following code to hide Weekends:
```
const oneDayInSeconds = 1000 * 60 * 60 * (24) * 1;
gantt.timeAxis.filter({
id: 'hideWeekendFilter',
filterBy: tick => {
let weekday = true;
if ((tick.duration >= oneDayInSeconds && tick.duration <= oneDayInSeconds) && (tick.startDate.getDay() == 6 || tick.startDate.getDay() == 0)) {
weekday = false;
}
return weekday;
}
});
```
Try zooming in 2-3 times using either mouse or buttons.
The zoom would go to a different location
Can we maintain the centerDate/mouseFocus in scroll like it was in the previous versions?
Thanks,
Rayudu | priority | zoomin zoomout goes to a different location hi after updating to bryntum gantt the zoomin zoomout feature completely moves the scroll to another location this occurs when the weekends are hidden steps to reproduce go to advanced example in console paste the following code to hide weekends const onedayinseconds gantt timeaxis filter id hideweekendfilter filterby tick let weekday true if tick duration onedayinseconds tick duration onedayinseconds tick startdate getday tick startdate getday weekday false return weekday try zooming in times using either mouse or buttons the zoom would go to a different location can we maintain the centerdate mousefocus in scroll like it was in the previous versions thanks rayudu | 1 |
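The request above is to keep the center date (or mouse focus) fixed while zooming. The underlying arithmetic is straightforward; a sketch in pixel coordinates, where `ppd` is pixels-per-day (this is illustrative math, not Bryntum's actual scroll code):

```python
# When zooming, recompute the scroll offset so the date under the viewport
# center (or an explicit focus point) stays put.

def scroll_after_zoom(scroll, viewport_w, old_ppd, new_ppd, focus_x=None):
    if focus_x is None:
        focus_x = viewport_w / 2              # default: keep the center date
    focus_day = (scroll + focus_x) / old_ppd  # date under the focus point
    return focus_day * new_ppd - focus_x      # new offset keeping it fixed

# Doubling the zoom with a 1000px viewport, scrolled so day 20 is centered:
new_scroll = scroll_after_zoom(scroll=500, viewport_w=1000, old_ppd=50, new_ppd=100)
assert (500 + 500) / 50 == 20                 # day 20 was centered before
assert new_scroll == 20 * 100 - 500           # == 1500
assert (new_scroll + 500) / 100 == 20         # day 20 is still centered
```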
303,043 | 9,301,589,338 | IssuesEvent | 2019-03-23 23:35:38 | E3SM-Project/ParallelIO | https://api.github.com/repos/E3SM-Project/ParallelIO | closed | Expose fillvalues in PIO1/PIO2 | High Priority enhancement | Currently PIO1 does not expose fillvalues (fillvalues for
int, float etc) to the user. However PIO2 does. And
neither PIO1 or PIO2 expose a fillvalue for character type.
At least MPAS wants to use fillvalues from PIO in its code (the
current usage is a hack, but some applications like MPAS might
want to use fillvalues).
PIO1 might not have historically supported fillvalues because there
was an MPI IO option (no longer available in PIO2) that does not
have a specific fillvalue for the types (unlike NetCDF, ... etc).
But since applications are interested in using fillvalues it might be
worthwhile providing them to the user since this avoids an
interface mismatch between PIO1 and PIO2.
We can use achar(0) as the fillvalue for character types, if a
corresponding fillvalue is not available in the underlying library. | 1.0 | Expose fillvalues in PIO1/PIO2 - Currently PIO1 does not expose fillvalues (fillvalues for
int, float etc) to the user. However PIO2 does. And
neither PIO1 nor PIO2 exposes a fillvalue for the character type.
At least MPAS wants to use fillvalues from PIO in its code (the
current usage is a hack, but some applications like MPAS might
want to use fillvalues).
PIO1 might not have historically supported fillvalues because there
was an MPI IO option (no longer available in PIO2) that does not
have a specific fillvalue for the types (unlike NetCDF, ... etc).
But since applications are interested in using fillvalues it might be
worthwhile providing them to the user since this avoids an
interface mismatch between PIO1 and PIO2.
We can use achar(0) as the fillvalue for character types, if a
corresponding fillvalue is not available in the underlying library. | priority | expose fillvalues in currently does not expose fillvalues fillvalues for int float etc to the user however does and neither or expose a fillvalue for character type at least mpas wants to use fillvalues from pio in its code the current usage is a hack but some applications like mpas might want to use fillvalues might not have historically supported fillvalues because there was an mpi io option no longer available in that does not have a specific fillvalue for the types unlike netcdf etc but since applications are interested in using fillvalues it might be worthwhile providing them to the user since this avoids an interface mismatch between and we can use achar as the fillvalue for character types if a corresponding fillvalue is not available in the underlying library | 1 |
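The fallback idea sketched above — take the underlying library's fill value when it defines one, otherwise substitute a type-specific default such as NUL for characters — is simple to express. An illustrative sketch (the names and numeric defaults are placeholders chosen to mirror common NetCDF-style fill values, not PIO's actual constants):

```javascript
// Placeholder per-type defaults; '\0' mirrors the achar(0) suggestion
// above for character data.
const DEFAULT_FILL = {
  int: -2147483647,
  float: 9.9692099683869e36,
  char: '\0',
};

// Prefer the underlying library's fill value when it provides one,
// and fall back to the type default otherwise.
function fillValueFor(type, libraryFill) {
  return libraryFill !== undefined ? libraryFill : DEFAULT_FILL[type];
}
```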
513,624 | 14,924,064,504 | IssuesEvent | 2021-01-23 21:51:33 | ls1intum/Artemis | https://api.github.com/repos/ls1intum/Artemis | closed | Exercise assessment dashboard displaying all assessments as draft | bug priority:high | ### Describe the bug
Graded exercises appear as drafts in the exercise assessment dashboard and the date of the last build is shown. Feedback and grading remains saved in the exercises, so this seems only to affect the exercise assessment dashboard.
This issue is not present at older exercises with their assessment due dated already passed (can't confirm whether assessments submitted before the Artemis update with their due date not passed yet are affected or not).
#### To Reproduce
1. Start a new assessment
2. Add feedback, submit the assessment
3. Click "Assess Next Submission" (not sure if this is necessary for reproducing)
4. Open the exercise assessment dashboard for the exercise
#### Expected behavior
The exercises should be shown as graded with their score shown.
#### Screenshots
Exercise Assessment Dashboard (these exercises are all graded and are displayed wrongly):

Assessment after opening via the exercise assessment dashboard (feedback and grading remain saved correctly, "Override assessment" and "Assess Next Submission" button are displayed correctly):

### Environment
<details><pre>
- OS: Windows 10
- Browser: Latest Firefox, latest Google Chrome
- Artemis version: 4.9.1
</pre></details>
#### Additional context
This seems to be a regression after having Artemis 4.9.1 deployed. | 1.0 | Exercise assessment dashboard displaying all assessments as draft - ### Describe the bug
Graded exercises appear as drafts in the exercise assessment dashboard and the date of the last build is shown. Feedback and grading remains saved in the exercises, so this seems only to affect the exercise assessment dashboard.
This issue is not present at older exercises with their assessment due dated already passed (can't confirm whether assessments submitted before the Artemis update with their due date not passed yet are affected or not).
#### To Reproduce
1. Start a new assessment
2. Add feedback, submit the assessment
3. Click "Assess Next Submission" (not sure if this is necessary for reproducing)
4. Open the exercise assessment dashboard for the exercise
#### Expected behavior
The exercises should be shown as graded with their score shown.
#### Screenshots
Exercise Assessment Dashboard (these exercises are all graded and are displayed wrongly):

Assessment after opening via the exercise assessment dashboard (feedback and grading remain saved correctly, "Override assessment" and "Assess Next Submission" button are displayed correctly):

### Environment
<details><pre>
- OS: Windows 10
- Browser: Latest Firefox, latest Google Chrome
- Artemis version: 4.9.1
</pre></details>
#### Additional context
This seems to be a regression after having Artemis 4.9.1 deployed. | priority | exercise assessment dashboard displaying all assessments as draft describe the bug graded exercises appear as drafts in the exercise assessment dashboard and the date of the last build is shown feedback and grading remains saved in the exercises so this seems only to affect the exercise assessment dashboard this issue is not present at older exercises with their assessment due dated already passed can t confirm whether assessments submitted before the artemis update with their due date not passed yet are affected or not to reproduce start a new assessment add feedback submit the assessment click assess next submission not sure if this is necessary for reproducing open the exercise assessment dashboard for the exercise expected behavior the exercises should be shown as graded with their score shown screenshots exercise assessment dashboard these exercises are all graded and are displayed wrongly assessment after opening via the exercise assessment dashboard feedback and grading remain saved correctly override assessment and assess next submission button are displayed correctly environment os windows browser latest firefox latest google chrome artemis version additional context this seems to be a regression after having artemis deployed | 1 |
640,107 | 20,773,803,659 | IssuesEvent | 2022-03-16 08:33:43 | SAP/xsk | https://api.github.com/repos/SAP/xsk | closed | [Destination] Body not sent in request | bug API core priority-high effort-high customer _Torino_ _Tantel_ | ### Details
```javascript
const request = new $.net.http.Request($.net.http.POST, "");
request.setBody(payload);
client.request(request, destination);
```
DestinationFacade uses https://github.com/SAP/xsk/blob/main/modules/api/api-xsjs/src/main/java/com/sap/xsk/api/destination/DestinationRequest.java which does not have body
The target should be to come as close as possible to reusing logic from https://github.com/eclipse/dirigible/blob/master/api%2Fapi-facade%2Fapi-http%2Fsrc%2Fmain%2Fjava%2Forg%2Feclipse%2Fdirigible%2Fapi%2Fv3%2Fhttp%2FHttpClientFacade.java as | 1.0 | [Destination] Body not sent in request - ### Details
```javascript
const request = new $.net.http.Request($.net.http.POST, "");
request.setBody(payload);
client.request(request, destination);
```
DestinationFacade uses https://github.com/SAP/xsk/blob/main/modules/api/api-xsjs/src/main/java/com/sap/xsk/api/destination/DestinationRequest.java which does not have body
The target should be to come as close as possible to reusing logic from https://github.com/eclipse/dirigible/blob/master/api%2Fapi-facade%2Fapi-http%2Fsrc%2Fmain%2Fjava%2Forg%2Feclipse%2Fdirigible%2Fapi%2Fv3%2Fhttp%2FHttpClientFacade.java as | priority | body not sent in request details javascript const request new net http request net http post request setbody payload client request request destination destinationfacade uses which does not have body the target should be to come as close as possible to reusing logic from as | 1 |
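The root cause described above is a request wrapper that never stores or forwards its body. A minimal, framework-independent sketch of the missing plumbing (class and function names are hypothetical, not the actual XSJS or DestinationFacade types):

```javascript
// Hypothetical stand-in for DestinationRequest; the key point is that
// the body set by the caller must be kept on the object.
class SketchRequest {
  constructor(method, path) {
    this.method = method;
    this.path = path;
    this.headers = {};
    this.body = null;
  }
  setBody(payload) {
    this.body = payload;
  }
}

// Building the outgoing request must copy the body across; omitting
// this field reproduces the bug reported here.
function toOutgoing(request) {
  return {
    method: request.method,
    path: request.path,
    headers: request.headers,
    body: request.body,
  };
}

const req = new SketchRequest('POST', '/svc');
req.setBody('{"a":1}');
const outgoing = toOutgoing(req);
```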
736,740 | 25,485,576,745 | IssuesEvent | 2022-11-26 10:48:17 | discord-jv/discord.jv | https://api.github.com/repos/discord-jv/discord.jv | closed | [Feature Request] Command Frontend | status: in progress type: feature priority: high status: claimed IMPORTANT | ### General Troubleshooting
- [X] You've checked for similar feature requests.
- [X] You've updated to the latest version of the API.
- [X] You've checked the PRs for features relating to your suggestion.
### Feature Request Description
Implement front-end command system, similar to https://github.com/ice-games/Java-Discord-Framework:
Extend a `CommandListener` class
Override `onCommand(CommandInteractionEvent ev) `
Put code inside method
Register in main class
This is **claimed** by me and will be started soon. | 1.0 | [Feature Request] Command Frontend - ### General Troubleshooting
- [X] You've checked for similar feature requests.
- [X] You've updated to the latest version of the API.
- [X] You've checked the PRs for features relating to your suggestion.
### Feature Request Description
Implement front-end command system, similar to https://github.com/ice-games/Java-Discord-Framework:
Extend a `CommandListener` class
Override `onCommand(CommandInteractionEvent ev) `
Put code inside method
Register in main class
This is **claimed** by me and will be started soon. | priority | command frontend general troubleshooting you ve have checked for similar feature requests you ve updated to the latest version of the api you ve have checked the pr s for features relating to your suggestion feature request description implement front end command system similar to extend a commandlistener class override oncommand commandinteractionevent ev put code inside method register in main class this is claimed by me and will be started soon | 1 |
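The four steps above follow the usual subclass-and-register listener pattern. A compact sketch of that flow (written in JavaScript for brevity; the CommandListener/onCommand names echo the issue text and are illustrative, not the library's real API):

```javascript
// Base class the framework would provide.
class CommandListener {
  onCommand(event) {} // user code overrides this
}

// Extend the listener and put the command code inside onCommand.
class PingCommand extends CommandListener {
  onCommand(event) {
    return `pong: ${event.name}`;
  }
}

// Registration step: the dispatcher routes every event to each listener.
const listeners = [];
function register(listener) {
  listeners.push(listener);
}
function dispatch(event) {
  return listeners.map((l) => l.onCommand(event));
}

register(new PingCommand());
const results = dispatch({ name: 'ping' });
```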
546,302 | 16,008,721,246 | IssuesEvent | 2021-04-20 07:54:18 | ita-social-projects/TeachUA | https://api.github.com/repos/ita-social-projects/TeachUA | closed | [Головна сторінка] Wrong photo's size | Priority: High bug | **Environment:** macOS Big Sur 11.1, Google Chrome 89.0.4
**Reproducible:** always
**Build found:** last commit from https://speak-ukrainian.org.ua/dev/
**Steps to reproduce**
1. Go to https://speak-ukrainian.org.ua/dev/
2. Pay attention to the carousel's photo.
**Actual result**
The size of photo on the banner is 1506px x 400px.
<img width="1601" alt="Знімок екрана 2021-04-03 о 23 53 05" src="https://user-images.githubusercontent.com/78917926/113491279-36502180-94d8-11eb-9ff7-42515dcce59d.png">
**Expected result**
The size of photo on the banner is 1268px x 400px.
<img width="840" alt="Знімок екрана 2021-04-03 о 23 52 28" src="https://user-images.githubusercontent.com/78917926/113491287-48ca5b00-94d8-11eb-8421-8ee7d1b7cbdc.png">
**User story and test case links**
E.g.: "User story #108
**Labels to be added**
"Bug", Priority ("pri: high"), Severity ("severity: minor"), Type ("UI")
NOT BUG, Picture is flexible
| 1.0 | [Головна сторінка] Wrong photo's size - **Environment:** macOS Big Sur 11.1, Google Chrome 89.0.4
**Reproducible:** always
**Build found:** last commit from https://speak-ukrainian.org.ua/dev/
**Steps to reproduce**
1. Go to https://speak-ukrainian.org.ua/dev/
2. Pay attention to the carousel's photo.
**Actual result**
The size of photo on the banner is 1506px x 400px.
<img width="1601" alt="Знімок екрана 2021-04-03 о 23 53 05" src="https://user-images.githubusercontent.com/78917926/113491279-36502180-94d8-11eb-9ff7-42515dcce59d.png">
**Expected result**
The size of photo on the banner is 1268px x 400px.
<img width="840" alt="Знімок екрана 2021-04-03 о 23 52 28" src="https://user-images.githubusercontent.com/78917926/113491287-48ca5b00-94d8-11eb-8421-8ee7d1b7cbdc.png">
**User story and test case links**
E.g.: "User story #108
**Labels to be added**
"Bug", Priority ("pri: high"), Severity ("severity: minor"), Type ("UI")
NOT BUG, Picture is flexible
| priority | wrong photo s size environment macos big sur google chrome reproducible always build found last commit from steps to reproduce go to pay attention to the carousel s photo actual result the size of photo on the banner is x img width alt знімок екрана о src expected result the size of photo on the banner is x img width alt знімок екрана о src user story and test case links e g user story labels to be added bug priority pri high severity severity minor type ui not bug picture is flexible | 1 |
60,490 | 3,129,904,630 | IssuesEvent | 2015-09-09 05:47:10 | leo-project/leofs | https://api.github.com/repos/leo-project/leofs | closed | Cannot execute the rebalance command with manual operation | Bug Priority-HIGH _leo_manager _leo_redundant_manager | Occasionally, we cannot execute the rebalance command manually using ``leofs-adm``.
I found that, several minutes after launching a new node, the status of the attached node was changed automatically by the table-sync process.
```erlang
%% Initial status
(manager_0@127.0.0.1)1> leo_redundant_manager_api:get_members().
{ok,[{member,'storage105@1.2.3.18',[],"1.2.3.18",
13077,ipv4,1441264848784584,attached,168,[],[]},
{member,'storage104@1.2.3.77',"node_053a7d8e",
"1.2.3.77",13077,ipv4,1399947793754906,running,168,[],
[]},
{member,'storage103@1.2.3.76',"node_0d6fab04",
"1.2.3.76",13077,ipv4,1399947791101110,running,168,[],
[]},
{member,'storage102@1.2.3.75',"node_7e013647",
"1.2.3.75",13075,ipv4,1411002000664416,running,168,[],
[]},
{member,'storage101@1.2.3.74',"node_78c19e24",
"1.2.3.74",13075,ipv4,1409191248571549,running,168,[],
[]}]}
%% A few min later
(manager_0@127.0.0.1)2> leo_redundant_manager_api:get_members().
{ok,[{member,'storage105@1.2.3.18',"node_725e382c",
"1.2.3.18",13077,ipv4,1441264848784584,running,168,[],
[]},
{member,'storage104@1.2.3.77',"node_053a7d8e",
"1.2.3.77",13077,ipv4,1399947793754906,running,168,[],
[]},
{member,'storage103@1.2.3.76',"node_0d6fab04",
"1.2.3.76",13077,ipv4,1399947791101110,running,168,[],
[]},
{member,'storage102@1.2.3.75',"node_7e013647",
"1.2.3.75",13075,ipv4,1411002000664416,running,168,[],
[]},
{member,'storage101@1.2.3.74',"node_78c19e24",
"1.2.3.74",13075,ipv4,1409191248571549,running,168,[],
[]}]}
``` | 1.0 | Cannot execute the rebalance command with manual operation - Occasionally, we cannot execute the rebalance command manually using ``leofs-adm``.
I found that, several minutes after launching a new node, the status of the attached node was changed automatically by the table-sync process.
```erlang
%% Initial status
(manager_0@127.0.0.1)1> leo_redundant_manager_api:get_members().
{ok,[{member,'storage105@1.2.3.18',[],"1.2.3.18",
13077,ipv4,1441264848784584,attached,168,[],[]},
{member,'storage104@1.2.3.77',"node_053a7d8e",
"1.2.3.77",13077,ipv4,1399947793754906,running,168,[],
[]},
{member,'storage103@1.2.3.76',"node_0d6fab04",
"1.2.3.76",13077,ipv4,1399947791101110,running,168,[],
[]},
{member,'storage102@1.2.3.75',"node_7e013647",
"1.2.3.75",13075,ipv4,1411002000664416,running,168,[],
[]},
{member,'storage101@1.2.3.74',"node_78c19e24",
"1.2.3.74",13075,ipv4,1409191248571549,running,168,[],
[]}]}
%% A few min later
(manager_0@127.0.0.1)2> leo_redundant_manager_api:get_members().
{ok,[{member,'storage105@1.2.3.18',"node_725e382c",
"1.2.3.18",13077,ipv4,1441264848784584,running,168,[],
[]},
{member,'storage104@1.2.3.77',"node_053a7d8e",
"1.2.3.77",13077,ipv4,1399947793754906,running,168,[],
[]},
{member,'storage103@1.2.3.76',"node_0d6fab04",
"1.2.3.76",13077,ipv4,1399947791101110,running,168,[],
[]},
{member,'storage102@1.2.3.75',"node_7e013647",
"1.2.3.75",13075,ipv4,1411002000664416,running,168,[],
[]},
{member,'storage101@1.2.3.74',"node_78c19e24",
"1.2.3.74",13075,ipv4,1409191248571549,running,168,[],
[]}]}
``` | priority | cannot execute the rebalance command with manual operaiton we cannot rarely execute the rebalance command with using leofs adm manually i found that after having launched a new node several minutes later status of an attached node was changed automatically by the table sync process erlang initial status manager leo redundant manager api get members ok attached member node running member node running member node running member node running a few min later manager leo redundant manager api get members ok member node running member node running member node running member node running member node running | 1 |
179,330 | 6,623,883,170 | IssuesEvent | 2017-09-22 09:12:55 | xcat2/xcat-core | https://api.github.com/repos/xcat2/xcat-core | closed | [fvt] Checking all issues with 2.13.7 label are closed | component:test priority:high sprint1 type:feature | what to do
* [x] Checking all issues with 2.13.7 label are closed | 1.0 | [fvt] Checking all issues with 2.13.7 label are closed - what to do
* [x] Checking all issues with 2.13.7 label are closed | priority | checking all issues with label are closed what to do checking all issues with label are closed | 1 |
187,492 | 6,758,181,319 | IssuesEvent | 2017-10-24 13:28:23 | pi-top/pi-topPULSE | https://api.github.com/repos/pi-top/pi-topPULSE | reopened | Replace Numpy's uint8 module in configuration.py | enhancement priority: high | This is causing excessive CPU usage across all cores - a simple alternative likely exists.
Note, this looks like it was caused by a problem with the default Raspbian version of numpy. See [this](https://github.com/numpy/numpy/issues/6237) similar issue
| 1.0 | Replace Numpy's uint8 module in configuration.py - This is causing excessive CPU usage across all cores - a simple alternative likely exists.
Note, this looks like it was caused by a problem with the default Raspbian version of numpy. See [this](https://github.com/numpy/numpy/issues/6237) similar issue
| priority | replace numpy s module in configuration py this is causing excessive cpu usage across all cores a simple alternative likely exists note this looks like it was caused by a problem with the default raspbian version of numpy see similar issue | 1 |
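Assuming the `uint8` call is only used to wrap values into the 0–255 range, one "simple alternative" is a plain bitwise mask, which drops the dependency entirely (a sketch, not the project's actual code):

```javascript
// numpy.uint8(x) wraps its argument modulo 256; a bitwise mask gives
// the same wrap-around without any third-party library.
function toUint8(x) {
  return x & 0xff;
}
```

The mask also matches uint8 overflow semantics for negative inputs: -1 wraps around to 255.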
709,313 | 24,373,282,825 | IssuesEvent | 2022-10-03 21:22:53 | codbex/codbex-kronos | https://api.github.com/repos/codbex/codbex-kronos | closed | [IDE] Migration perspective not loading | bug IDE effort-low priority-high | Migration perspective is not loading due to some errors:
```
2022-09-30 10:13:58.044 [ERROR] [http-nio-8080-exec-8] o.e.d.e.a.r.AbstractResourceExecutor - There is no resource at the specified path: /registry/public/ide-core/ui/message-hub.js
2022-09-30 10:13:58.220 [ERROR] [http-nio-8080-exec-8] o.e.d.e.a.r.AbstractResourceExecutor - There is no resource at the specified path: /registry/public/ide-core/ui/ui-layout.js
2022-09-30 10:13:58.415 [ERROR] [http-nio-8080-exec-2] o.e.d.e.a.r.AbstractResourceExecutor - There is no resource at the specified path: /registry/public/ide-core/ui/ui-core-ng-modules.js
```
Probably related to:
- https://github.com/eclipse/dirigible/issues/2034 | 1.0 | [IDE] Migration perspective not loading - Migration perspective is not loading due to some errors:
```
2022-09-30 10:13:58.044 [ERROR] [http-nio-8080-exec-8] o.e.d.e.a.r.AbstractResourceExecutor - There is no resource at the specified path: /registry/public/ide-core/ui/message-hub.js
2022-09-30 10:13:58.220 [ERROR] [http-nio-8080-exec-8] o.e.d.e.a.r.AbstractResourceExecutor - There is no resource at the specified path: /registry/public/ide-core/ui/ui-layout.js
2022-09-30 10:13:58.415 [ERROR] [http-nio-8080-exec-2] o.e.d.e.a.r.AbstractResourceExecutor - There is no resource at the specified path: /registry/public/ide-core/ui/ui-core-ng-modules.js
```
Probably related to:
- https://github.com/eclipse/dirigible/issues/2034 | priority | migration perspective not loading migration perspective is not loading due to some errors o e d e a r abstractresourceexecutor there is no resource at the specified path registry public ide core ui message hub js o e d e a r abstractresourceexecutor there is no resource at the specified path registry public ide core ui ui layout js o e d e a r abstractresourceexecutor there is no resource at the specified path registry public ide core ui ui core ng modules js probably related to | 1 |
696,181 | 23,889,092,119 | IssuesEvent | 2022-09-08 10:00:22 | serverlessworkflow/synapse | https://api.github.com/repos/serverlessworkflow/synapse | closed | Compensation states are not displayed in the UI | bug priority: high dashboard weight: 3 | **What happened**:
Compensation states are not displayed in the UI
**What you expected to happen**:
Compensation states to be properly rendered in the UI
**How to reproduce it**:
Create a new workflow definition that defines compensation.
Example:
```json
{
"id":"compensation-test",
"name":"Compensation Test",
"version":"0.1.0",
"specVersion":"0.8",
"functions":[
{
"name":"call-non-existing-uri",
"type":"rest",
"operation":"http://google.com/fails#fake"
}
],
"states":[
{
"name":"Fault vonlontarily",
"type":"operation",
"actions":[
{
"name":"Call non-existing uri",
"functionRef":{
"refName":"call-non-existing-uri"
}
}
],
"compensatedBy":"Compensate",
"end":true
},
{
"name":"Compensate",
"type":"inject",
"data":{
"status":"compensated"
},
"usedForCompensation":true,
"end":true
}
]
}
``` | 1.0 | Compensation states are not displayed in the UI - **What happened**:
Compensation states are not displayed in the UI
**What you expected to happen**:
Compensation states to be properly rendered in the UI
**How to reproduce it**:
Create a new workflow definition that defines compensation.
Example:
```json
{
"id":"compensation-test",
"name":"Compensation Test",
"version":"0.1.0",
"specVersion":"0.8",
"functions":[
{
"name":"call-non-existing-uri",
"type":"rest",
"operation":"http://google.com/fails#fake"
}
],
"states":[
{
"name":"Fault vonlontarily",
"type":"operation",
"actions":[
{
"name":"Call non-existing uri",
"functionRef":{
"refName":"call-non-existing-uri"
}
}
],
"compensatedBy":"Compensate",
"end":true
},
{
"name":"Compensate",
"type":"inject",
"data":{
"status":"compensated"
},
"usedForCompensation":true,
"end":true
}
]
}
``` | priority | compensation states are not displayed in the ui what happened compensation states are not displayed in the ui what you expected to happen compensation states to be properly rendered in the ui how to reproduce it create a new workflow definition that defines compensation example json id compensation test name compensation test version specversion functions name call non existing uri type rest operation states name fault vonlontarily type operation actions name call non existing uri functionref refname call non existing uri compensatedby compensate end true name compensate type inject data status compensated usedforcompensation true end true | 1 |
717,378 | 24,673,387,329 | IssuesEvent | 2022-10-18 15:11:55 | devvsakib/hacktoberfest-react-project | https://api.github.com/repos/devvsakib/hacktoberfest-react-project | closed | [MAJOR] convert website into REACT website | enhancement help wanted good first issue hacktoberfest [priority: high] | ## Major Update
So far we did well. Now it's time to convert our **website** into a **React** website.
Now we are using simple html css static website. But we are going to convert this into react.
Please comment only if you have experience. | 1.0 | [MAJOR] convert website into REACT website - ## Major Update
So far we did well. Now it's time to convert our **website** into a **React** website.
Now we are using simple html css static website. But we are going to convert this into react.
Please comment only if you have experience. | priority | convert website into react website major update so far we did well now its time to convert our website into react website now we are using simple html css static website but we are going to convert this into react please comment only if you have exprience | 1 |
588,218 | 17,650,121,615 | IssuesEvent | 2021-08-20 12:04:27 | bnbcfyh/cmpe352-2021-repeat-project | https://api.github.com/repos/bnbcfyh/cmpe352-2021-repeat-project | closed | Change the template name to "create_equipment.html" | Type: Bug Status: Done Priority: High | There is a bug in the [views.py](https://github.com/bnbcfyh/cmpe352-2021-repeat-project/blob/master/practice-app/website/views.py). The template's name mentioned in the 46th line of the file should have been "create_equipment" but it is coded as "equipments.html" and thus throwing an exception.
46th line:
` return render_template("equipments.html", user=current_user)`
This should be changed as:
` return render_template("create_equipment.html", user=current_user)`
The issue is solved already; however, the final file will be uploaded in a pull request after all the corrections and fixes are done. | 1.0 | Change the template name to "create_equipment.html" - There is a bug in the [views.py](https://github.com/bnbcfyh/cmpe352-2021-repeat-project/blob/master/practice-app/website/views.py). The template's name mentioned in the 46th line of the file should have been "create_equipment" but it is coded as "equipments.html" and thus throwing an exception.
46th line:
` return render_template("equipments.html", user=current_user)`
This should be changed as:
` return render_template("create_equipment.html", user=current_user)`
The issue is solved already; however, the final file will be uploaded in a pull request after all the corrections and fixes are done. | priority | change the template name to create equipment html there is a bug in the the template s name mentioned in the line of the file should have been create equipment but it is coded as equipments html and thus throwing an exception line return render template equipments html user current user this should be changed as return render template create equipment html user current user the issue is solved already however the final file will be uploaded in a pull request after all the corrections and fixes are done | 1 |
128,966 | 5,079,484,283 | IssuesEvent | 2016-12-28 20:20:21 | leeensminger/DelDOT-NPDES-Field-Tool | https://api.github.com/repos/leeensminger/DelDOT-NPDES-Field-Tool | opened | Advanced query produces no results for migrated barrel conveyances involving roadway culverts | bug - high priority | Related to issue #262.
A multi barrel roadway culvert conveyance was migrated properly according to the migrated database (3 conveyance records, 3 pipe segments, 3 barrels, 1 barrel conveyance) and displays in the field tool properly with 3 geometries. However, when I tried to open the barrel conveyance by using the advanced query builder, no features were found. I will test barrel conveyances involving other conveyance types to see if the issue is specific to roadway culverts. The advanced query does work when you search for newly created barrel conveyances. Currently though, using the advanced query builder is the ONLY way to open a barrel conveyance so without this function, we cannot open, edit, or inspect existing barrel conveyances.

| 1.0 | Advanced query produces no results for migrated barrel conveyances involving roadway culverts - Related to issue #262.
A multi barrel roadway culvert conveyance was migrated properly according to the migrated database (3 conveyance records, 3 pipe segments, 3 barrels, 1 barrel conveyance) and displays in the field tool properly with 3 geometries. However, when I tried to open the barrel conveyance by using the advanced query builder, no features were found. I will test barrel conveyances involving other conveyance types to see if the issue is specific to roadway culverts. The advanced query does work when you search for newly created barrel conveyances. Currently though, using the advanced query builder is the ONLY way to open a barrel conveyance so without this function, we cannot open, edit, or inspect existing barrel conveyances.

| priority | advanced query produces no results for migrated barrel conveyances involving roadway culverts related to issue a multi barrel roadway culvert conveyance was migrated properly according to the migrated database conveyance records pipe segments barrels barrel conveyance and displays in the field tool properly with geometries however when i tried to open the barrel conveyance by using the advanced query builder no features were found i will test barrel conveyances involving other conveyance types to see if the issue is specific to roadway culverts the advanced query does work when you search for newly created barrel conveyances currently though using the advanced query builder is the only way to open a barrel conveyance so without this function we cannot open edit or inspect existing barrel conveyances | 1 |
71,163 | 3,352,723,331 | IssuesEvent | 2015-11-18 00:11:47 | Solinea/goldstone-server | https://api.github.com/repos/Solinea/goldstone-server | opened | Layout redesigned dashboard and wire up JavaScript | priority 2: high type: enhancement | * Layout redesigned dashboard with image placeholders on new viz'.
* convert 'single-block' and 'double-block' grid layout system to bootstrap col-md-x scheme.
* wire up JavaScript.
* Wire up links in sidebar to render existing chart pages. | 1.0 | Layout redesigned dashboard and wire up JavaScript - * Layout redesigned dashboard with image placeholders on new viz'.
* convert 'single-block' and 'double-block' grid layout system to bootstrap col-md-x scheme.
* wire up JavaScript.
* Wire up links in sidebar to render existing chart pages. | priority | layout redesigned dashboard and wire up javascript layout redesigned dashboard with image placeholders on new viz convert single block and double block grid layout system to bootstrap col md x scheme wire up javascript wire up links in sidebar to render existing chart pages | 1 |
788,736 | 27,763,784,797 | IssuesEvent | 2023-03-16 10:06:04 | ditrit/leto-modelizer | https://api.github.com/repos/ditrit/leto-modelizer | closed | Access to the index view from home page | User Story Priority: Very high | ## Description
**As a** user, on the homepage, after creating or importing a project,
**I want** to be in the index view (view containing the list model & model template)
**so that** I can manage the models of my project | 1.0 | Access to the index view from home page - ## Description
**As a** user, on the homepage, after creating or importing a project,
**I want** to be in the index view (view containing the list model & model template)
**so that** I can manage the models of my project | priority | access to the index view from home page description as a user in the homepage after creating or importing project i want to be in the index view view containing the list model model template so that i can manage the models of my project | 1 |
797,307 | 28,143,376,770 | IssuesEvent | 2023-04-02 07:31:58 | KDT3-Final-6/final-project-FE | https://api.github.com/repos/KDT3-Final-6/final-project-FE | closed | Survey page - Survey UXUI | Status: Available Status: Review Needed Priority: High Type: Feature/Function | ## ✔️ Checklist
- [x] Please write the title in the form `development page - development purpose`.
## 💡 Development purpose
- Markup stage before implementing the functionality
## 🌐 Details
- [x] Age
- [x] Gender
- [x] Companion type
- [ ] Trip length
- [x] Religious orientation
- [x] Travel theme
- [x] Preferred travel period (season)
- [x] Results page
 | 1.0 | Survey page - Survey UXUI - ## ✔️ Checklist
- [x] Please write the title in the form `development page - development purpose`.
## 💡 Development purpose
- Markup stage before implementing the functionality
## 🌐 Details
- [x] Age
- [x] Gender
- [x] Companion type
- [ ] Trip length
- [x] Religious orientation
- [x] Travel theme
- [x] Preferred travel period (season)
- [x] Results page
 | priority | survey page survey uxui ✔️ checklist please write the title in the form development page development purpose 💡 development purpose markup stage before implementing the functionality 🌐 details age gender companion type trip length religious orientation travel theme preferred travel period season results page | 1 |
190,552 | 6,819,910,947 | IssuesEvent | 2017-11-07 11:59:23 | GoDatascience/pyor | https://api.github.com/repos/GoDatascience/pyor | opened | Collect the log of the Experiment | enhancement high priority | Currently, there's no field for the log in the Experiment and it's not collected from Celery. | 1.0 | Collect the log of the Experiment - Currently, there's no field for the log in the Experiment and it's not collected from Celery. | priority | collect the log of the experiment currently there s no field for the log in the experiment and it s not collected from celery | 1 |
93,611 | 3,906,495,352 | IssuesEvent | 2016-04-19 09:03:53 | HGustavs/LenaSYS | https://api.github.com/repos/HGustavs/LenaSYS | closed | Specific grade doesn't show up when hovering a corrected dugga | DuggaSys Group-1 highPriority | If a student pass a dugga, he cant see what grade he actually gets. If you hover on the green circle you can only see "Pass" and the date the submission was corrected. You should be able to see the specific grade G / VG and 3 / 4 / 5. | 1.0 | Specific grade doesn't show up when hovering a corrected dugga - If a student pass a dugga, he cant see what grade he actually gets. If you hover on the green circle you can only see "Pass" and the date the submission was corrected. You should be able to see the specific grade G / VG and 3 / 4 / 5. | priority | specific grade doesn t show up when hovering a corrected dugga if a student pass a dugga he cant see what grade he actually gets if you hover on the green circle you can only see pass and the date the submission was corrected you should be able to see the specific grade g vg and | 1 |
753,212 | 26,341,274,788 | IssuesEvent | 2023-01-10 17:52:33 | vaticle/typedb | https://api.github.com/repos/vaticle/typedb | closed | Re-enable failing reasoner correctness steps | type: bug priority: high domain: reasoner | ## Description
Many steps that verify the reasoner's soundness or completeness fail. These failures need to be investigated and fixed so that they can be reinstated.
## Environment
1. OS (where TypeDB server runs): Grabl
2. TypeDB version (and platform): TypeDB 2.2.0 as of https://github.com/vaticle/typedb/pull/6374
3. TypeDB client: none, internal correctness verification
## Additional Information
Each is demarcated by `# TODO: Fails` so that they can be found easily. | 1.0 | Re-enable failing reasoner correctness steps - ## Description
Many steps that verify the reasoner's soundness or completeness fail. These failures need to be investigated and fixed so that they can be reinstated.
## Environment
1. OS (where TypeDB server runs): Grabl
2. TypeDB version (and platform): TypeDB 2.2.0 as of https://github.com/vaticle/typedb/pull/6374
3. TypeDB client: none, internal correctness verification
## Additional Information
Each is demarcated by `# TODO: Fails` so that they can be found easily. | priority | re enable failing reasoner correctness steps description many steps that verify the reasoner s soundness or completeness fail these failures need to be investigated and fixed so that they can be reinstated environment os where typedb server runs grabl typedb version and platform typedb as of typedb client none internal correctness verification additional information each is demarcated by todo fails so that they can be found easily | 1 |
324,739 | 9,908,652,115 | IssuesEvent | 2019-06-27 18:49:47 | Cog-Creators/Red-DiscordBot | https://api.github.com/repos/Cog-Creators/Red-DiscordBot | closed | [Config] Cache is set to new object even if object isn't JSON serializable | High Priority Status: PR Pending Type: Bug V3 | # Other bugs
#### What were you trying to do?
Set object that isn't JSON serializable.
```py
config.someval.set({231, 321})
```
#### What were you expecting to happen?
Get exception about non-serializable object and Config's cache to stay intact.
#### What actually happened?
Config's cache got updated with a non-serializable object.
#### How can we reproduce this issue?
Assuming `config` contains `Config` instance, use this:
```py
try:
await config.someval.set({231, 321})
except TypeError as e:
print(str(e))
print("yeah, we all know it's not JSON serializable, shut up")
print(await config.someval())
```
You should get this printed:
```py
Object of type set is not JSON serializable
yeah, we all know, it's not JSON serializable, shut up
{321, 231}
``` | 1.0 | [Config] Cache is set to new object even if object isn't JSON serializable - # Other bugs
#### What were you trying to do?
Set object that isn't JSON serializable.
```py
config.someval.set({231, 321})
```
#### What were you expecting to happen?
Get exception about non-serializable object and Config's cache to stay intact.
#### What actually happened?
Config's cache got updated with a non-serializable object.
#### How can we reproduce this issue?
Assuming `config` contains `Config` instance, use this:
```py
try:
await config.someval.set({231, 321})
except TypeError as e:
print(str(e))
print("yeah, we all know it's not JSON serializable, shut up")
print(await config.someval())
```
You should get this printed:
```py
Object of type set is not JSON serializable
yeah, we all know, it's not JSON serializable, shut up
{321, 231}
``` | priority | cache is set to new object even if object isn t json serializable other bugs what were you trying to do set object that isn t json serializable py config someval set what were you expecting to happen get exception about non serializable object and config s cache to stay intact what actually happened config s cache got updated with a non serializable object how can we reproduce this issue assuming config contains config instance use this py try await config someval set except typeerror as e print str e print yeah we all know it s not json serializable shut up print await config someval you should get this printed py object of type set is not json serializable yeah we all know it s not json serializable shut up | 1 |
71,638 | 3,366,115,383 | IssuesEvent | 2015-11-21 03:13:16 | TheLens/elections | https://api.github.com/repos/TheLens/elections | opened | make nav/table headings consistent | High priority | Nav says house, table says representative.
Use House
Nav says Senate, table says Senator.
Use Senate | 1.0 | make nav/table headings consistent - Nav says house, table says representative.
Use House
Nav says Senate, table says Senator.
Use Senate | priority | make nav table headings consistent nav says house table says representative use house nav says senate table says senator use senate | 1 |
103,891 | 4,186,989,463 | IssuesEvent | 2016-06-23 16:03:44 | CPLamb/RouteTracker | https://api.github.com/repos/CPLamb/RouteTracker | opened | Rework File pickerView | Priority - High | There are still problems with the pickerView in that a newly downloaded file does not reliably display on the pickerView. After downloading a file the new file does NOT display on the pickerView. If you press PRINT LIST button before going to the pickerView, the file DOES appear. Therefore there must be something in the LIST files method that updates the pickerView
Also, we need to fix the pickerView for the very first time the program is loaded OR after all of the files have been deleted. I think we should have 1 hardcoded list in the app that the app comes preloaded with. Therefore when a User first gets the app, there would be 1 demo list that he can look at. | 1.0 | Rework File pickerView - There are still problems with the pickerView in that a newly downloaded file does not reliably display on the pickerView. After downloading a file the new file does NOT display on the pickerView. If you press PRINT LIST button before going to the pickerView, the file DOES appear. Therefore there must be something in the LIST files method that updates the pickerView
Also, we need to fix the pickerView for the very first time the program is loaded OR after all of the files have been deleted. I think we should have 1 hardcoded list in the app that the app comes preloaded with. Therefore when a User first gets the app, there would be 1 demo list that he can look at. | priority | rework file pickerview there are still problems with the pickerview in that a newly downloaded file does not reliably display on the pickerview after downloading a file the new file does not display on the pickerview if you press print list button before going to the pickerview the file does appear therefore there must be something in the list files method that updates the pickerview also we need to fix the pickerview for the very first time the program is loaded or after all of the files have been deleted i think we should have hardcoded list in the app that the app comes preloaded with therefore when a user first gets the app there would be demo list that he can look at | 1 |
628,169 | 19,977,199,772 | IssuesEvent | 2022-01-29 09:20:32 | ballerina-platform/ballerina-dev-website | https://api.github.com/repos/ballerina-platform/ballerina-dev-website | opened | Move the Tutorials to one line down | Priority/Highest Type/Improvement Area/LearnPages | **Description:**
$subject in learn landing page
<img width="1333" alt="image" src="https://user-images.githubusercontent.com/16300038/151655414-fe271525-d6ab-441d-a463-a0a4bba7b313.png">
**Describe your problem(s)**
**Describe your solution(s)**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| 1.0 | Move the Tutorials to one line down - **Description:**
$subject in learn landing page
<img width="1333" alt="image" src="https://user-images.githubusercontent.com/16300038/151655414-fe271525-d6ab-441d-a463-a0a4bba7b313.png">
**Describe your problem(s)**
**Describe your solution(s)**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| priority | move the tutorials to one line down description subject in learn landing page img width alt image src describe your problem s describe your solution s related issues optional suggested labels optional suggested assignees optional | 1 |
527,048 | 15,307,764,304 | IssuesEvent | 2021-02-24 21:23:36 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | AMP Shadow popstate navigation: Cannot read property 'originalHash' of undefined | Category: PWA P1: High Priority Type: Bug WG: performance | ## What's the issue?
We're working on implementing the AMP Shadow functionality into our existing PWA.
The page is loading successfully and being attached to the DOM with the `AMP.attachShadowDocAsStream()` method, however, when navigating away from the AMP Shadow page to another (non-AMP) page and then hit the back button to get back to the shadow page, we get this error from the `attachShadowDocAsStream` method.
```
TypeError: Cannot read property 'originalHash' of undefined
at t (mode.js:82)
at Pn (runtime.js:510)
at On.attachShadowDocAsStream (runtime.js:585)
```
The navigation is handled via our PWA router and the flow is basically as follows:
- Any navigation closes the AMP doc if exists and removes the container element.
- Creates a new AMP doc if needed by calling the streaming method and attaches it to a new element.
So navigating back to the AMP Shadow page should have no existing AMP doc and no container to worry about.
## How do we reproduce the issue?
I was not able to build a test case for this since it perfectly worked there... So I'm turning here to get the pro insight from those who know the code if they can resolve this without an example.
Here's the code.
```
(window.AMP = window.AMP || []).push(function(){
// clean up previous AMP page
var prevContainer = this.shadowContainer;
this.shadowContainer = document.createElement('div');
if(prevContainer)
this.el.replaceChild(this.shadowContainer, prevContainer);
else
this.el.appendChild(this.shadowContainer);
// if exists, close previous amp page (when navigating between AMP shadow pages)
if(this.ampShadow)
this.ampShadow.close();
var ampShadow = this.ampShadow = AMP.attachShadowDocAsStream(this.shadowContainer, url);
streamDocument(url, function(chunk){
// for first chunk checking for validity
ampShadow.writer.write(chunk);
}.bind(this)).then(function(){
ampShadow.writer.close();
// update meta tags and link handler
}.bind(this)).catch(function(error){
// handle error
}.bind(this))
});
function streamDocument(url, callback){
return fetch(url).then(function(response){
var reader = response.body.getReader();
var decoder = new TextDecoder();
function readChunk() {
return reader.read().then(function(chunk){
var text = decoder.decode(
chunk.value || new Uint8Array(),
{stream: !chunk.done});
if(text) {
console.log('got bytes: ', text.length);
callback(text);
}
if(chunk.done){
console.log('end stream...');
return Promise.resolve();
} else {
return readChunk();
}
});
}
return readChunk();
});
}
```
On navigation to a non-AMP shadow page we simply close the amp doc and remove the references.
```
if(this.ampShadow)
this.ampShadow.close();
this.ampShadow = null;
this.shadowContainer = null;
```
## What browsers are affected?
Chrome 75 on windows 10
## Which AMP version is affected?
AMP ⚡ HTML shadows – Version 1907161745080
https://cdn.ampproject.org/shadow-v0.js
| 1.0 | AMP Shadow popstate navigation: Cannot read property 'originalHash' of undefined - ## What's the issue?
We're working on implementing the AMP Shadow functionality into our existing PWA.
The page is loading successfully and being attached to the DOM with the `AMP.attachShadowDocAsStream()` method, however, when navigating away from the AMP Shadow page to another (non-AMP) page and then hit the back button to get back to the shadow page, we get this error from the `attachShadowDocAsStream` method.
```
TypeError: Cannot read property 'originalHash' of undefined
at t (mode.js:82)
at Pn (runtime.js:510)
at On.attachShadowDocAsStream (runtime.js:585)
```
The navigation is handled via our PWA router and the flow is basically as follows:
- Any navigation closes the AMP doc if exists and removes the container element.
- Creates a new AMP doc if needed by calling the streaming method and attaches it to a new element.
So navigating back to the AMP Shadow page should have no existing AMP doc and no container to worry about.
## How do we reproduce the issue?
I was not able to build a test case for this since it perfectly worked there... So I'm turning here to get the pro insight from those who know the code if they can resolve this without an example.
Here's the code.
```
(window.AMP = window.AMP || []).push(function(){
// clean up previous AMP page
var prevContainer = this.shadowContainer;
this.shadowContainer = document.createElement('div');
if(prevContainer)
this.el.replaceChild(this.shadowContainer, prevContainer);
else
this.el.appendChild(this.shadowContainer);
// if exists, close previous amp page (when navigating between AMP shadow pages)
if(this.ampShadow)
this.ampShadow.close();
var ampShadow = this.ampShadow = AMP.attachShadowDocAsStream(this.shadowContainer, url);
streamDocument(url, function(chunk){
// for first chunk checking for validity
ampShadow.writer.write(chunk);
}.bind(this)).then(function(){
ampShadow.writer.close();
// update meta tags and link handler
}.bind(this)).catch(function(error){
// handle error
}.bind(this))
});
function streamDocument(url, callback){
return fetch(url).then(function(response){
var reader = response.body.getReader();
var decoder = new TextDecoder();
function readChunk() {
return reader.read().then(function(chunk){
var text = decoder.decode(
chunk.value || new Uint8Array(),
{stream: !chunk.done});
if(text) {
console.log('got bytes: ', text.length);
callback(text);
}
if(chunk.done){
console.log('end stream...');
return Promise.resolve();
} else {
return readChunk();
}
});
}
return readChunk();
});
}
```
On navigation to a non-AMP shadow page we simply close the amp doc and remove the references.
```
if(this.ampShadow)
this.ampShadow.close();
this.ampShadow = null;
this.shadowContainer = null;
```
## What browsers are affected?
Chrome 75 on windows 10
## Which AMP version is affected?
AMP ⚡ HTML shadows – Version 1907161745080
https://cdn.ampproject.org/shadow-v0.js
| priority | amp shadow popstate navigation cannot read property originalhash of undefined what s the issue we re working on implementing the amp shadow functionality into our existing pwa the page is loading successfully and being attached to the dom with the amp attachshadowdocasstream method however when navigating away from the amp shadow page to another non amp page and then hit the back button to get back to the shadow page we get this error from the attachshadowdocasstream method typeerror cannot read property originalhash of undefined at t mode js at pn runtime js at on attachshadowdocasstream runtime js the navigation is handled via our pwa router and the flow is basically as follows any navigation closes the amp doc if exists and removes the container element creates a new amp doc if needed by calling the streaming method and attaches it to a new element so navigating back to the amp shadow page should have no existing amp doc and no container to worry about how do we reproduce the issue i was not able to built a test case for this since it perfectly worked there so i m turning here to get the pro insight from those who know the code if they can resolve this without an example here s the code window amp window amp push function clean up previous amp page var prevcontainer this shadowcontainer this shadowcontainer document createelement div if prevcontainer this el replacechild this shadowcontainer prevcontainer else this el appendchild this shadowcontainer if exists close previous amp page when navigating between amp shadow pages if this ampshadow this ampshadow close var ampshadow this ampshadow amp attachshadowdocasstream this shadowcontainer url streamdocument url function chunk for first chunk checking for validity ampshadow writer write chunk bind this then function ampshadow writer close update meta tags and link handler bind this catch function error handle error bind this function streamdocument url callback return fetch url then function response 
var reader response body getreader var decoder new textdecoder function readchunk return reader read then function chunk var text decoder decode chunk value new stream chunk done if text console log got bytes text length callback text if chunk done console log end stream return promise resolve else return readchunk return readchunk on navigation to a non amp shadow page we simply close the amp doc and remove the references if this ampshadow this ampshadow close this ampshadow null this shadowcontainer null what browsers are affected chrome on windows which amp version is affected amp ⚡ html shadows – version | 1 |
242,483 | 7,842,952,599 | IssuesEvent | 2018-06-19 02:41:54 | steemit/devportal-tutorials-js | https://api.github.com/repos/steemit/devportal-tutorials-js | closed | JS-T: Migrate Getting Started from DP | 5 priority/high | As a Dev, I want to know what I have to do to get my machine ready to run the tutorials.
**AC**
- [x] goes in `tutorials/00_getting_started`
- [x] outlines environment and developer skill level expectations
- [x] goes through all steps needed to get the computer ready to run the tutorials (mention expected operating system(s))
- [x] links to 'tutorial 1' at the end
- [x] after following the instructions here, I should be able to run any tutorial by following that tutorial's run instructions. | 1.0 | JS-T: Migrate Getting Started from DP - As a Dev, I want to know what I have to do to get my machine ready to run the tutorials.
**AC**
- [x] goes in `tutorials/00_getting_started`
- [x] outlines environment and developer skill level expectations
- [x] goes through all steps needed to get the computer ready to run the tutorials (mention expected operating system(s))
- [x] links to 'tutorial 1' at the end
- [x] after following the instructions here, I should be able to run any tutorial by following that tutorial's run instructions. | priority | js t migrate getting started from dp as a dev i want to know what i have to do to get my machine ready to run the tutorials ac goes in tutorials getting started outlines environment and developer skill level expectations goes through all steps needed to get the computer ready to run the tutorials mention expected operating system s links to tutorial at the end after following the instructions here i should be able to run any tutorial by following that tutorial s run instructions | 1 |
570,016 | 17,017,138,110 | IssuesEvent | 2021-07-02 13:36:03 | stackabletech/operator-rs | https://api.github.com/repos/stackabletech/operator-rs | closed | Add functionality to set required labels to build_metadata function | priority/high type/enhancement | We have agreed to use a set of default labels, as per what [Kubernetes best practice expects](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/).
> app.kubernetes.io/name The name of the application
> app.kubernetes.io/instance A unique name identifying the instance of an application
> app.kubernetes.io/version The current version of the application (e.g., a semantic version, revision hash, etc.)
> app.kubernetes.io/component The component within the architecture
> app.kubernetes.io/part-of The name of a higher level application this one is part of
> app.kubernetes.io/managed-by The tool being used to manage the operation of an application
It would make sense to have these supported by build_metadata in some form. | 1.0 | Add functionality to set required labels to build_metadata function - We have agreed to use a set of default labels, as per what [Kubernetes best practice expects](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/).
> app.kubernetes.io/name The name of the application
> app.kubernetes.io/instance A unique name identifying the instance of an application
> app.kubernetes.io/version The current version of the application (e.g., a semantic version, revision hash, etc.)
> app.kubernetes.io/component The component within the architecture
> app.kubernetes.io/part-of The name of a higher level application this one is part of
> app.kubernetes.io/managed-by The tool being used to manage the operation of an application
It would make sense to have these supported by build_metadata in some form. | priority | add functionality to set required labels to build metadata function we have agreed to use a set of default labels as per what app kubernetes io name the name of the application app kubernetes io instance a unique name identifying the instance of an application app kubernetes io version the current version of the application e g a semantic version revision hash etc app kubernetes io component the component within the architecture app kubernetes io part of the name of a higher level application this one is part of app kubernetes io managed by the tool being used to manage the operation of an application it would make sense to have these supported by build metadata in some form | 1 |
607,437 | 18,782,359,063 | IssuesEvent | 2021-11-08 08:30:59 | betagouv/service-national-universel | https://api.github.com/repos/betagouv/service-national-universel | opened | fix: adapt the young person's profile page | enhancement priority-HIGH | ### Feature related to a problem?
_No response_
### Feature
- [ ] place of birth
- [ ] school grade
- [ ] residence abroad (if yes, the host's information)
- [ ] new questions on special situations
### Comments
_No response_ | 1.0 | fix: adapt the young person's profile page - ### Feature related to a problem?
_No response_
### Feature
- [ ] place of birth
- [ ] school grade
- [ ] residence abroad (if yes, the host's information)
- [ ] new questions on special situations
### Comments
_No response_ | priority | fix adapt the young person s profile page feature related to a problem no response feature place of birth school grade residence abroad if yes the host s information new questions on special situations comments no response | 1 |
73,262 | 3,410,172,807 | IssuesEvent | 2015-12-04 18:56:50 | washingtontrails/vms | https://api.github.com/repos/washingtontrails/vms | closed | MBP: Table Styles | Bug High Priority MBP BUDGET Plone Reviewing | We find that with the new MyBackpack install our existing table styles have changed, rendering them unreadable. Seems like a colored background has been added and all text is in caps.
Here are pages with some examples:
https://www.wta.org/hiking-info/outdoor-leadership/workshops
https://www.wta.org/action
https://www.wta.org/go-hiking/hikes/amendment.2015-12-04.4787074982/edit
| 1.0 | MBP: Table Styles - We find that with the new MyBackpack install our existing table styles have changed, rendering them unreadable. Seems like a colored background has been added and all text is in caps.
Here are pages with some examples:
https://www.wta.org/hiking-info/outdoor-leadership/workshops
https://www.wta.org/action
https://www.wta.org/go-hiking/hikes/amendment.2015-12-04.4787074982/edit
| priority | mbp table styles we find that with the new mybackpack install our existing table styles have changed rendering them unreadable seems like a colored background has been added and all text is in caps here are pages with some examples | 1 |
828,461 | 31,830,048,368 | IssuesEvent | 2023-09-14 09:59:18 | fractal-analytics-platform/fractal-server | https://api.github.com/repos/fractal-analytics-platform/fractal-server | opened | Update `output_dataset.meta` also when workflow execution fails | High Priority | Branching from #842 (and differently from #788):
When a workflow fails at task N, update `output_dataset.meta` to the latest valid state (that is, the output of task N-1).
Consequences of this change in different scenarios still need to be reviewed. | 1.0 | Update `output_dataset.meta` also when workflow execution fails - Branching from #842 (and differently from #788):
When a workflow fails at task N, update `output_dataset.meta` to the latest valid state (that is, the output of task N-1).
Consequences of this change in different scenarios still need to be reviewed. | priority | update output dataset meta also when workflow execution fails branching from and differently from when a workflow fails at task n update output dataset meta to the latest valid state that is the output of task n consequences of this change in different scenarios still need to be reviewed | 1 |
621,724 | 19,595,400,580 | IssuesEvent | 2022-01-05 17:14:44 | firelab/windninja | https://api.github.com/repos/firelab/windninja | closed | Station data is fetched in the wrong time zone | bug priority:high component:point | We are trying to make a request to SynopticLabs in UTC, but are making the request in local time. | 1.0 | Station data is fetched in the wrong time zone - We are trying to make a request to SynopticLabs in UTC, but are making the request in local time. | priority | station data is fetched in the wrong time zone we are trying to make a request to synopticlabs in utc but are making the request in local time | 1 |
461,807 | 13,236,500,966 | IssuesEvent | 2020-08-18 19:55:18 | ROCmSoftwarePlatform/MIOpen | https://api.github.com/repos/ROCmSoftwarePlatform/MIOpen | opened | Post-merge review(s) of PR #347 | priority_high | This is to be closed once all comments mentioned below are resolved.
- [ ] https://github.com/ROCmSoftwarePlatform/MIOpen/pull/347#pullrequestreview-469461853
/cc reviewers of #347 -- @daniellowell @asroy @zjing14 @carlushuang @TejashShah | 1.0 | Post-merge review(s) of PR #347 - This is to be closed once all comments mentioned below are resolved.
- [ ] https://github.com/ROCmSoftwarePlatform/MIOpen/pull/347#pullrequestreview-469461853
/cc reviewers of #347 -- @daniellowell @asroy @zjing14 @carlushuang @TejashShah | priority | post merge review s of pr this is to be closed once all comments mentioned below are resolved cc reviewers of daniellowell asroy carlushuang tejashshah | 1 |
160,291 | 6,085,955,915 | IssuesEvent | 2017-06-17 19:36:10 | hotosm/learnosm | https://api.github.com/repos/hotosm/learnosm | closed | Print friendly | enhancement high priority | The PDFs are currently available as Google docs but these will get outdated as pull requests come in. So, it looks like:
- A script should be available in the repo to generate the PDFs (using pandoc?)
- The generated PDFs should have links to them on the website
- Ideally, the PDFs should be generated after each pull request
| 1.0 | Print friendly - The PDFs are currently available as Google docs but these will get outdated as pull requests come in. So, it looks like:
- A script should be available in the repo to generate the PDFs (using pandoc?)
- The generated PDFs should have links to them on the website
- Ideally, the PDFs should be generated after each pull request
| priority | print friendly the pdfs are currently available as google docs but these will get outdated as pull requests come in so it looks like a script should be available in the repo to generate the pdfs using pandoc the generated pdfs should have links to them on the website ideally the pdfs should be generated after each pull request | 1 |
771,796 | 27,092,638,395 | IssuesEvent | 2023-02-14 22:34:23 | testifysec/witness | https://api.github.com/repos/testifysec/witness | closed | Timestamp request fails on 201 | bug priority high | The [sigstore timestamp](https://github.com/sigstore/timestamp-authority) project returns a 201 - created. Witness errors out on the response | 1.0 | Timestamp request fails on 201 - The [sigstore timestamp](https://github.com/sigstore/timestamp-authority) project returns a 201 - created. Witness errors out on the response | priority | timestamp request fails on the project returns a created witness errors out on the response | 1 |
453,645 | 13,086,232,602 | IssuesEvent | 2020-08-02 05:14:13 | Journaly/journaly | https://api.github.com/repos/Journaly/journaly | closed | Visual Bugs & Improvements | bug high priority visual | #### Visual Bugs To Fix
- [x] My Feed search box isn't the right width [#181]
- [x] Chevron/down arrow is in wrong position on My Feed filters [#181]
- [x] Chevron/down arrow is in wrong position on Settings Add Languages
- [x] Chevron/down arrow is in wrong position on New Post drop downs
- [x] Settings > languages drop downs are off on mobile view | 1.0 | Visual Bugs & Improvements - #### Visual Bugs To Fix
- [x] My Feed search box isn't the right width [#181]
- [x] Chevron/down arrow is in wrong position on My Feed filters [#181]
- [x] Chevron/down arrow is in wrong position on Settings Add Languages
- [x] Chevron/down arrow is in wrong position on New Post drop downs
- [x] Settings > languages drop downs are off on mobile view | priority | visual bugs improvements visual bugs to fix my feed search box isn t the right width chevron down arrow is in wrong position on my feed filters chevron down arrow is in wrong position on settings add languages chevron down arrow is in wrong position on new post drop downs settings languages drop downs are off on mobile view | 1 |
125,267 | 4,954,991,818 | IssuesEvent | 2016-12-01 19:11:07 | DICE-UNC/irods-cloud-browser | https://api.github.com/repos/DICE-UNC/irods-cloud-browser | closed | integrate ssl negotiation | Highest Priority | testing, merging, and tuning for ssl integration. Need to add a property to the config file in etc to set the negotiation policy in jargon properties. | 1.0 | integrate ssl negotiation - testing, merging, and tuning for ssl integration. Need to add a property to the config file in etc to set the negotiation policy in jargon properties. | priority | integrate ssl negotiation testing merging and tuning for ssl integration need to add a property to the config file in etc to set the negotiation policy in jargon properties | 1 |
669,782 | 22,640,595,860 | IssuesEvent | 2022-07-01 01:19:05 | heading1/WYLSBingsu | https://api.github.com/repos/heading1/WYLSBingsu | closed | [FE] Implement the post-creation POST hook | 🖥 Frontend ❗️high-priority 🔨 Feature | ## 🔨 Feature description
Implement a hook that sends the post-creation POST request and manages the related state
## 📑 Completion criteria
- [ ] Manage the state related to the request
- [ ] Handle a successful request
- [ ] Handle errors
## 💭 Related backlog
[FE] Writing page - API request - implement the post-creation POST hook
## 💭 Estimated work time
2h
| 1.0 | [FE] Implement the post-creation POST hook - ## 🔨 Feature description
Implement a hook that sends the post-creation POST request and manages the related state
## 📑 Completion criteria
- [ ] Manage the state related to the request
- [ ] Handle a successful request
- [ ] Handle errors
## 💭 Related backlog
[FE] Writing page - API request - implement the post-creation POST hook
## 💭 Estimated work time
2h
| priority | implement the post creation post hook 🔨 feature description implement a hook that sends the post creation post request and manages the related state 📑 completion criteria manage the state related to the request handle a successful request handle errors 💭 related backlog writing page api request implement the post creation post hook 💭 estimated work time | 1 |
639,923 | 20,769,144,126 | IssuesEvent | 2022-03-16 01:16:44 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | closed | quest `Is That Your Goblin?` bug | Priority-High | ### Current Behaviour
quest `Is That Your Goblin?` (id 12969)Unable to complete,talking to 'agnetta tyrsdotar' cannot trigger a fight against it
### Expected Blizzlike Behaviour
get quest 12969,and talking to 'agnetta tyrsdotar' can trigger a fight against it
### Source
_No response_
### Steps to reproduce the problem
1.quest add 12969
2.go c id 30154
3.talk it
### Extra Notes
_No response_
### AC rev. hash/commit
acd3ed8759477e6059b31f88377a790f3a0d8133
### Operating system
Ubuntu20.04
### Custom changes or Modules
_No response_ | 1.0 | quest `Is That Your Goblin?` bug - ### Current Behaviour
quest `Is That Your Goblin?` (id 12969)Unable to complete,talking to 'agnetta tyrsdotar' cannot trigger a fight against it
### Expected Blizzlike Behaviour
get quest 12969,and talking to 'agnetta tyrsdotar' can trigger a fight against it
### Source
_No response_
### Steps to reproduce the problem
1.quest add 12969
2.go c id 30154
3.talk it
### Extra Notes
_No response_
### AC rev. hash/commit
acd3ed8759477e6059b31f88377a790f3a0d8133
### Operating system
Ubuntu20.04
### Custom changes or Modules
_No response_ | priority | quest is that your goblin bug current behaviour quest is that your goblin id unable to complete talking to agnetta tyrsdotar cannot trigger a fight against it expected blizzlike behaviour get quest and talking to agnetta tyrsdotar can trigger a fight against it source no response steps to reproduce the problem quest add go c id talk it extra notes no response ac rev hash commit operating system custom changes or modules no response | 1 |
176,967 | 6,570,771,039 | IssuesEvent | 2017-09-10 04:15:34 | ponylang/ponyc | https://api.github.com/repos/ponylang/ponyc | closed | Package manager | difficulty: 2 - medium enhancement: 3 - ready for work priority: 3 - high | I saw a presentation from Big Techday 8 http://www.techcast.com/events/bigtechday8/pranner-1450/. Great one BTW ;)
You said you wish to use someones package manager and you're looking for suggestions.
You need to try http://nixos.org/nix/. You can't do it better :)
See example setup for Golang:
- https://nixos.org/nixpkgs/manual/#sec-language-go
- http://lethalman.blogspot.com/2015/02/developing-in-golang-with-nix-package.html
| 1.0 | Package manager - I saw a presentation from Big Techday 8 http://www.techcast.com/events/bigtechday8/pranner-1450/. Great one BTW ;)
You said you wish to use someones package manager and you're looking for suggestions.
You need to try http://nixos.org/nix/. You can't do it better :)
See example setup for Golang:
- https://nixos.org/nixpkgs/manual/#sec-language-go
- http://lethalman.blogspot.com/2015/02/developing-in-golang-with-nix-package.html
| priority | package manager i saw a presentation from big techday great one btw you said you wish to use someones package manager and you re looking for suggestions you need to try you can t do it better see example setup for golang | 1 |
50,369 | 3,006,356,368 | IssuesEvent | 2015-07-27 09:51:04 | Itseez/opencv | https://api.github.com/repos/Itseez/opencv | opened | white border while displaying a full image with python2.7 and opencv | affected: 2.4 auto-transferred bug category: highgui-images priority: normal | Transferred from http://code.opencv.org/issues/3188
```
|| Stefano Spirolazzi on 2013-08-02 10:49
|| Priority: Normal
|| Affected: branch '2.4'
|| Category: highgui-images
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x86 / Any
```
white border while displaying a full image with python2.7 and opencv
-----------
```
Hi!
I want to display a black image in a full screen mode. It looks a really sample task to do, but when i run the code below i get a little white strip on top and on the left. It doesn't matter if i change the size of the numpy array, i always get it.
If you run this code, do you get a fully black screen??
I have tried with different laptop and different version of opencv
thanks
@img = np.zeros((800, 1280)) #my resolution is 800, 1280
cv2.namedWindow("test", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("test", cv2.WND_PROP_FULLSCREEN, cv2.cv.CV_WINDOW_FULLSCREEN)
cv2.imshow("test",img)
cv2.waitKey(0)@
```
History
-------
##### Victor Kocheganov on 2013-08-06 06:55
```
Hello Stefano,
thank you for submitting this ticket!
Could you please duplicate this question to http://answers.opencv.org/questions/, likely somebody's already faced with similar issue and could help. And it would be highly appreciated if you'll have time to investigate this issue and propose your own fix (if this is indeed a bug). Please see also http://www.code.opencv.org/projects/opencv/wiki/How_to_contribute for details.
Thank you in advance,
Victor Kocheganov
- Target version set to 2.4.7
- Assignee set to Vadim Pisarevsky
- Status changed from New to Open
- Category set to highgui-images
```
##### Victor Kocheganov on 2013-08-09 10:37
```
- Assignee changed from Vadim Pisarevsky to Alexander Smorkalov
```
##### Alexander Smorkalov on 2013-08-29 13:59
```
Hello Stefano!
I tried to reproduce your issue on my Linux desktop. Your code works perfect for me. I use Ubuntu and GTK back-end for OpenCV. Please select proper platform in ticket properties if you use something other. I will try the same on Windows and close the issue if everything is ok.
My python and c++ code:
<pre>
import numpy as np
import cv2
img = np.zeros((1080, 1920)) #my resolution is 800, 1280
cv2.namedWindow("test", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("test", cv2.WND_PROP_FULLSCREEN, cv2.cv.CV_WINDOW_FULLSCREEN)
cv2.imshow("test",img)
cv2.waitKey(0)
</pre>
<pre>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
int main(int, char**)
{
cv::Mat frame = cv::Mat::zeros(1080, 1920, CV_8UC3);
cv::namedWindow("test", cv::WND_PROP_FULLSCREEN);
cv::setWindowProperty("test", cv::WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
cv::imshow("test", frame);
cv::waitKey(0);
return 0;
}
</pre>
```
##### Alexander Smorkalov on 2013-08-29 14:52
```
I reproduce the issue on Windows 8 with WIN32UI back-end.
```
##### Alexander Smorkalov on 2013-09-12 10:51
```
- Target version changed from 2.4.7 to Next Hackathon
``` | 1.0 | white border while displaying a full image with python2.7 and opencv - Transferred from http://code.opencv.org/issues/3188
```
|| Stefano Spirolazzi on 2013-08-02 10:49
|| Priority: Normal
|| Affected: branch '2.4'
|| Category: highgui-images
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x86 / Any
```
white border while displaying a full image with python2.7 and opencv
-----------
```
Hi!
I want to display a black image in a full screen mode. It looks a really sample task to do, but when i run the code below i get a little white strip on top and on the left. It doesn't matter if i change the size of the numpy array, i always get it.
If you run this code, do you get a fully black screen??
I have tried with different laptop and different version of opencv
thanks
@img = np.zeros((800, 1280)) #my resolution is 800, 1280
cv2.namedWindow("test", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("test", cv2.WND_PROP_FULLSCREEN, cv2.cv.CV_WINDOW_FULLSCREEN)
cv2.imshow("test",img)
cv2.waitKey(0)@
```
History
-------
##### Victor Kocheganov on 2013-08-06 06:55
```
Hello Stefano,
thank you for submitting this ticket!
Could you please duplicate this question to http://answers.opencv.org/questions/, likely somebody's already faced with similar issue and could help. And it would be highly appreciated if you'll have time to investigate this issue and propose your own fix (if this is indeed a bug). Please see also http://www.code.opencv.org/projects/opencv/wiki/How_to_contribute for details.
Thank you in advance,
Victor Kocheganov
- Target version set to 2.4.7
- Assignee set to Vadim Pisarevsky
- Status changed from New to Open
- Category set to highgui-images
```
##### Victor Kocheganov on 2013-08-09 10:37
```
- Assignee changed from Vadim Pisarevsky to Alexander Smorkalov
```
##### Alexander Smorkalov on 2013-08-29 13:59
```
Hello Stefano!
I tried to reproduce your issue on my Linux desktop. Your code works perfect for me. I use Ubuntu and GTK back-end for OpenCV. Please select proper platform in ticket properties if you use something other. I will try the same on Windows and close the issue if everything is ok.
My python and c++ code:
<pre>
import numpy as np
import cv2
img = np.zeros((1080, 1920)) #my resolution is 800, 1280
cv2.namedWindow("test", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("test", cv2.WND_PROP_FULLSCREEN, cv2.cv.CV_WINDOW_FULLSCREEN)
cv2.imshow("test",img)
cv2.waitKey(0)
</pre>
<pre>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
int main(int, char**)
{
cv::Mat frame = cv::Mat::zeros(1080, 1920, CV_8UC3);
cv::namedWindow("test", cv::WND_PROP_FULLSCREEN);
cv::setWindowProperty("test", cv::WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
cv::imshow("test", frame);
cv::waitKey(0);
return 0;
}
</pre>
```
##### Alexander Smorkalov on 2013-08-29 14:52
```
I reproduce the issue on Windows 8 with WIN32UI back-end.
```
##### Alexander Smorkalov on 2013-09-12 10:51
```
- Target version changed from 2.4.7 to Next Hackathon
``` | priority | white border while displaying a full image with and opencv transferred from stefano spirolazzi on priority normal affected branch category highgui images tracker bug difficulty pr platform any white border while displaying a full image with and opencv hi i want to display a black image in a full screen mode it looks a really sample task to do but when i run the code below i get a little white strip on top and on the left it doesn t matter if i change the size of the numpy array i always get it if you run this code do you get a fully black screen i have tried with different laptop and different version of opencv thanks img np zeros my resolution is namedwindow test wnd prop fullscreen setwindowproperty test wnd prop fullscreen cv cv window fullscreen imshow test img waitkey history victor kocheganov on hello stefano thank you for submitting this ticket could you please duplicate this question to likely somebody s already faced with similar issue and could help and it would be highly appreciated if you ll have time to investigate this issue and propose your own fix if this is indeed a bug please see also for details thank you in advance victor kocheganov target version set to assignee set to vadim pisarevsky status changed from new to open category set to highgui images victor kocheganov on assignee changed from vadim pisarevsky to alexander smorkalov alexander smorkalov on hello stefano i tried to reproduce your issue on my linux desktop your code works perfect for me i use ubuntu and gtk back end for opencv please select proper platform in ticket properties if you use something other i will try the same on windows and close the issue if everything is ok my python and c code import numpy as np import img np zeros my resolution is namedwindow test wnd prop fullscreen setwindowproperty test wnd prop fullscreen cv cv window fullscreen imshow test img waitkey include include int main int char cv mat frame cv mat zeros cv cv namedwindow test cv wnd prop 
fullscreen cv setwindowproperty test cv wnd prop fullscreen cv window fullscreen cv imshow test frame cv waitkey return alexander smorkalov on i reproduce the issue on windows with back end alexander smorkalov on target version changed from to next hackathon | 1 |
448,443 | 12,950,753,118 | IssuesEvent | 2020-07-19 14:22:27 | GiftForGood/website | https://api.github.com/repos/GiftForGood/website | closed | View all chats for Donor | c.UserStory m.MVP priority.High | # User Story
<!--
https://github.com/GiftForGood/website/issues?q=is%3Aissue+label%3Ac.UserStory
-->
## Describe the user story in detail.
As a donor, I want to view all my chats in my inbox so that I know which NPO I have interacted with.
| 1.0 | View all chats for Donor - # User Story
<!--
https://github.com/GiftForGood/website/issues?q=is%3Aissue+label%3Ac.UserStory
-->
## Describe the user story in detail.
As a donor, I want to view all my chats in my inbox so that I know which NPO I have interacted with.
| priority | view all chats for donor user story describe the user story in detail as a donor i want to view all my chats in my inbox so that i know which npo i have interacted with | 1 |
188,441 | 6,776,364,480 | IssuesEvent | 2017-10-27 17:34:04 | redox-os/ion | https://api.github.com/repos/redox-os/ion | closed | Multiple Redirection Support | B-Class enhancement high-priority low-hanging fruit | It's currently only possible to perform one redirection at a time. Adding support for multiple redirections would be very useful in a number of cases, and should be somewhat easy to implement. In example, redirecting standard output to /dev/null, but redirecting standard error to a log.
```sh
cmd args... > /dev/null ^> log
``` | 1.0 | Multiple Redirection Support - It's currently only possible to perform one redirection at a time. Adding support for multiple redirections would be very useful in a number of cases, and should be somewhat easy to implement. In example, redirecting standard output to /dev/null, but redirecting standard error to a log.
```sh
cmd args... > /dev/null ^> log
``` | priority | multiple redirection support it s currently only possible to perform one redirection at a time adding support for multiple redirections would be very useful in a number of cases and should be somewhat easy to implement in example redirecting standard output to dev null but redirecting standard error to a log sh cmd args dev null log | 1 |
754,699 | 26,398,713,891 | IssuesEvent | 2023-01-12 22:09:55 | zulip/zulip-mobile | https://api.github.com/repos/zulip/zulip-mobile | closed | Don't immediately send attachment on upload | a-compose/send P1 high-priority | When a user uploads a attachment or image, it's immediately sent. Instead, we should change the behaviour to be more like the webapp: the input field should have the markdown for displaying the image inserted, but the user should be allowed to edit before sending. This will also allow people to add multiple attachments (see [this CZO thread](https://chat.zulip.org/#narrow/stream/48-mobile/topic/Multiple.20Attachments) requesting this).
Related: #1903 | 1.0 | Don't immediately send attachment on upload - When a user uploads a attachment or image, it's immediately sent. Instead, we should change the behaviour to be more like the webapp: the input field should have the markdown for displaying the image inserted, but the user should be allowed to edit before sending. This will also allow people to add multiple attachments (see [this CZO thread](https://chat.zulip.org/#narrow/stream/48-mobile/topic/Multiple.20Attachments) requesting this).
Related: #1903 | priority | don t immediately send attachment on upload when a user uploads a attachment or image it s immediately sent instead we should change the behaviour to be more like the webapp the input field should have the markdown for displaying the image inserted but the user should be allowed to edit before sending this will also allow people to add multiple attachments see requesting this related | 1 |
261,922 | 8,247,807,553 | IssuesEvent | 2018-09-11 16:32:20 | WordImpress/wp-business-reviews | https://api.github.com/repos/WordImpress/wp-business-reviews | closed | Ensure collections render in IE 11 | high-priority | ## Current Behavior
<!-- Required. Include any warnings or errors in the browser or console. -->
Testimonials don't get printed to the DOM in IE11. Wrapper elements (.wpbr-wrapper and collection container) get created, but they're empty. Tested on IE11 on Win7; other browsers and OSes work as expected.
## Expected Behavior
<!-- Required. -->
Testimonials should show on the page (near the bottom of the page, between "Contact us Today" link and "Make your ideal home a reality" block).
## Steps to Reproduce
<!-- Required. -->
1. Browsing the website in IE11.
2. Testimonials should show on the page (near the bottom of the page, between "Contact us Today" link and "Make your ideal home a reality" block).
3. Testimonials don't get printed to the DOM in IE11. Wrapper elements (.wpbr-wrapper and collection container) get created, but they're empty. Tested on IE11 on Win7; other browsers and OSes work as expected.
## Possible Solution
<!-- Optional. Delete if solution is unknown. -->
Include JS polyfills for older browsers that cannot use ES6 features.
## Related
<!-- Optional. Relevant links to issues, support tickets, or websites. -->
https://secure.helpscout.net/conversation/642144487/23564/
## Acceptance Criteria
<!-- Required. Include a checklist of conditions that must be true in order to close this issue. -->
- [x] Collections render in IE11.
## Environment
<details>
<summary>Operating System</summary>
<ul>
<li>Platform: Microsoft Windows</li>
</ul>
</details>
<details>
<summary>Browser</summary>
<ul>
<li>Name: IE</li>
<li>Version: 10, 11</li>
</ul>
</details>
| 1.0 | Ensure collections render in IE 11 - ## Current Behavior
<!-- Required. Include any warnings or errors in the browser or console. -->
Testimonials don't get printed to the DOM in IE11. Wrapper elements (.wpbr-wrapper and collection container) get created, but they're empty. Tested on IE11 on Win7; other browsers and OSes work as expected.
## Expected Behavior
<!-- Required. -->
Testimonials should show on the page (near the bottom of the page, between "Contact us Today" link and "Make your ideal home a reality" block).
## Steps to Reproduce
<!-- Required. -->
1. Browsing the website in IE11.
2. Testimonials should show on the page (near the bottom of the page, between "Contact us Today" link and "Make your ideal home a reality" block).
3. Testimonials don't get printed to the DOM in IE11. Wrapper elements (.wpbr-wrapper and collection container) get created, but they're empty. Tested on IE11 on Win7; other browsers and OSes work as expected.
## Possible Solution
<!-- Optional. Delete if solution is unknown. -->
Include JS polyfills for older browsers that cannot use ES6 features.
## Related
<!-- Optional. Relevant links to issues, support tickets, or websites. -->
https://secure.helpscout.net/conversation/642144487/23564/
## Acceptance Criteria
<!-- Required. Include a checklist of conditions that must be true in order to close this issue. -->
- [x] Collections render in IE11.
## Environment
<details>
<summary>Operating System</summary>
<ul>
<li>Platform: Microsoft Windows</li>
</ul>
</details>
<details>
<summary>Browser</summary>
<ul>
<li>Name: IE</li>
<li>Version: 10, 11</li>
</ul>
</details>
| priority | ensure collections render in ie current behavior testimonials don t get printed to the dom in wrapper elements wpbr wrapper and collection container get created but they re empty tested on on other browsers and oses work as expected expected behavior testimonials should show on the page near the bottom of the page between contact us today link and make your ideal home a reality block steps to reproduce browsing the website in testimonials should show on the page near the bottom of the page between contact us today link and make your ideal home a reality block testimonials don t get printed to the dom in wrapper elements wpbr wrapper and collection container get created but they re empty tested on on other browsers and oses work as expected possible solution include js polyfills for older browsers that cannot use features related acceptance criteria collections render in environment operating system platform microsoft windows browser name ie version | 1 |
643,397 | 20,956,223,613 | IssuesEvent | 2022-03-27 05:56:35 | AY2122S2-CS2103T-T09-2/tp | https://api.github.com/repos/AY2122S2-CS2103T-T09-2/tp | closed | As a Recruiter, I want to be able to search applicants by job | type.Story priority.High | so that I can view who is interviewing for the job and what rounds they are at | 1.0 | As a Recruiter, I want to be able to search applicants by job - so that I can view who is interviewing for the job and what rounds they are at | priority | as a recruiter i want to be able to search applicants by job so that i can view who is interviewing for the job and what rounds they are at | 1 |
42,278 | 2,870,007,863 | IssuesEvent | 2015-06-06 18:48:00 | bastet/Bastet | https://api.github.com/repos/bastet/Bastet | opened | Contain Responses | Priority: High Type: Enhancement | Contain all responses in a meta object which contains status code, and error or data fields. | 1.0 | Contain Responses - Contain all responses in a meta object which contains status code, and error or data fields. | priority | contain responses contain all responses in a meta object which contains status code and error or data fields | 1 |
489,011 | 14,100,184,635 | IssuesEvent | 2020-11-06 03:26:35 | PMEAL/OpenPNM | https://api.github.com/repos/PMEAL/OpenPNM | closed | Conductance models should optionally return element and conduit values | enhancement high priority proposal | It might be a good idea to add a ``mode`` argument to conductance models, that accepts 'elements' and 'conduit'.
In the case of 'conduit' the model returns a single conductance value for the whole conduit. In the case of 'elements' it returns a Nt-by-3 array or dict containing the conductance of each individual element in the conduit. This latter option would be consistent with our 'conduit.length' and 'conduit.area' geometry models. | 1.0 | Conductance models should optionally return element and conduit values - It might be a good idea to add a ``mode`` argument to conductance models, that accepts 'elements' and 'conduit'.
In the case of 'conduit' the model returns a single conductance value for the whole conduit. In the case of 'elements' it returns a Nt-by-3 array or dict containing the conductance of each individual element in the conduit. This latter option would be consistent with our 'conduit.length' and 'conduit.area' geometry models. | priority | conductance models should optionally return element and conduit values it might be a good idea to add a mode argument to conductance models that accepts elements and conduit in the case of conduit the model returns a single conductance value for the whole conduit in the case of elements it returns a nt by array or dict containing the conductance of each individual element in the conduit this latter option would be consistent with our conduit length and conduit area geometry models | 1 |
343,169 | 10,325,853,135 | IssuesEvent | 2019-09-01 20:55:52 | woocommerce/woocommerce-admin | https://api.github.com/repos/woocommerce/woocommerce-admin | opened | Product analytics incorrect when payment fails before successfully being paid | Analytics [Priority] High [Type] Bug | **Describe the bug**
When the payment for an order initially fails (with Stripe, in this case), but then has the payment successfully processed later, the Products analytics data is incorrect.
It appears to count the failed payment/order as 1 item sold, then counts that same item a second time once the payment is successful, showing a "+1 more" tag on the order.
I attempted to recreate this bug by manually changing order statuses (not using credit card failures/successes), but I was not able to reproduce it using that method. In the screenshot below, order number 265 was done manually (and is correct), and order 275 was the failed credit card payment (Stripe) that was then successfully processed (which is incorrect and adds an additional product):

(Link to screenshot: https://cld.wthms.co/nSRsYi+)
On the Products analytics page, you can see the additional item being counted in the stats, showing an item count of 3 for the 2 orders from the above screenshot:

(Link to screenshot: https://cld.wthms.co/pI1Gyi+)
This is also visible on the main Dashboard, but only appears to affect the Products analytics there (other data appears correct):

(Link to screenshot: https://cld.wthms.co/KHBrjJ+)
**To Reproduce**
Steps to reproduce the behavior:
1. Log in as a customer and attempt to purchase an item with incorrect credit card information (so it will fail). I used Stripe.
2. After you see the failure message on the Checkout page, go to My Account > Orders, click the "Pay" button, and enter correct credit card info so it is successful.
3. While logged in as an Admin, go to Analytics > Orders to see the "+1 tag".
4. While logged in as an Admin, go to Analytics > Products to see the incorrect item count and revenue data.
**Expected behavior**
When a failed order is successfully paid, the item count and net revenue for the products on that order should accurately reflect what was actually purchased. | 1.0 | Product analytics incorrect when payment fails before successfully being paid - **Describe the bug**
When the payment for an order initially fails (with Stripe, in this case), but then has the payment successfully processed later, the Products analytics data is incorrect.
It appears to count the failed payment/order as 1 item sold, then counts that same item a second time once the payment is successful, showing a "+1 more" tag on the order.
I attempted to recreate this bug by manually changing order statuses (not using credit card failures/successes), but I was not able to reproduce it using that method. In the screenshot below, order number 265 was done manually (and is correct), and order 275 was the failed credit card payment (Stripe) that was then successfully processed (which is incorrect and adds an additional product):

(Link to screenshot: https://cld.wthms.co/nSRsYi+)
On the Products analytics page, you can see the additional item being counted in the stats, showing an item count of 3 for the 2 orders from the above screenshot:

(Link to screenshot: https://cld.wthms.co/pI1Gyi+)
This is also visible on the main Dashboard, but only appears to affect the Products analytics there (other data appears correct):

(Link to screenshot: https://cld.wthms.co/KHBrjJ+)
**To Reproduce**
Steps to reproduce the behavior:
1. Log in as a customer and attempt to purchase an item with incorrect credit card information (so it will fail). I used Stripe.
2. After you see the failure message on the Checkout page, go to My Account > Orders, click the "Pay" button, and enter correct credit card info so it is successful.
3. While logged in as an Admin, go to Analytics > Orders to see the "+1 tag".
4. While logged in as an Admin, go to Analytics > Products to see the incorrect item count and revenue data.
**Expected behavior**
When a failed order is successfully paid, the item count and net revenue for the products on that order should accurately reflect what was actually purchased. | priority | product analytics incorrect when payment fails before successfully being paid describe the bug when the payment for an order initially fails with stripe in this case but then has the payment successfully processed later the products analytics data is incorrect it appears to count the failed payment order as item sold then counts that same item a second time once the payment is successful showing a more tag on the order i attempted to recreate this bug by manually changing order statuses not using credit card failures successes but i was not able to reproduce it using that method in the screenshot below order number was done manually and is correct and order was the failed credit card payment stripe that was then successfully processed which is incorrect and adds an additional product link to screenshot on the products analytics page you can see the additional item being counted in the stats showing an item count of for the orders from the above screenshot link to screenshot this is also visible on the main dashboard but only appears to affect the products analytics there other data appears correct link to screenshot to reproduce steps to reproduce the behavior log in as a customer and attempt to purchase an item with incorrect credit card information so it will fail i used stripe after you see the failure message on the checkout page go to my account orders click the pay button and enter correct credit card info so it is successful while logged in as an admin go to analytics orders to see the tag while logged in as an admin go to analytics products to see the incorrect item count and revenue data expected behavior when a failed order is successfully paid the item count and net revenue for the products on that order should accurately reflect what was actually purchased | 1 |
317,894 | 9,670,491,379 | IssuesEvent | 2019-05-21 20:03:10 | E3SM-Project/ParallelIO | https://api.github.com/repos/E3SM-Project/ParallelIO | closed | Some autogenerated fortran tests have wrong strides | High Priority Next Release bug | Some autogenerated Fortran tests have the wrong value for strides.
The testing suite currently generates some tests like,
init_finalize_np2_nio2_st2
The above test tries to run with 2 processes, 2 I/O tasks separated by a stride of 2 (the maximum stride possible in this case is 1).
However this does not result in an error because the testing framework resets the invalid stride to 1 (when the test is run). | 1.0 | Some autogenerated fortran tests have wrong strides - Some autogenerated Fortran tests have the wrong value for strides.
The testing suite currently generates some tests like,
init_finalize_np2_nio2_st2
The above test tries to run with 2 processes, 2 I/O tasks separated by a stride of 2 (the maximum stride possible in this case is 1).
However this does not result in an error because the testing framework resets the invalid stride to 1 (when the test is run). | priority | some autogenerated fortran tests have wrong strides some autogenerated fortran tests have the wrong value for strides the testing suite currently generates some tests like init finalize the above test tries to run with processes i o tasks separated by a stride of the maximum stride possible in this case is however this does not result in an error because the testing framework resets the invalid stride to when the test is run | 1 |
236,576 | 7,750,885,506 | IssuesEvent | 2018-05-30 15:27:32 | fedora-infra/bodhi | https://api.github.com/repos/fedora-infra/bodhi | opened | Bodhi crashes while composing a modular repo if it has a mailing list defined | Composer Crash High priority | Bodhi was [recently configured to have a mailing list configured for ```fedora_modular```](https://pagure.io/fedora-infrastructure/issue/6872). The next time a modular repo was composed, it failed with this traceback:
```
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: Traceback (most recent call last):
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/consumers/masher.py", line 337, in run
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: self.work()
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/consumers/masher.py", line 422, in work
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: self.send_stable_announcements()
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/consumers/masher.py", line 70, in wrapper
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: retval = method(self, *args, **kwargs)
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/consumers/masher.py", line 732, in send_stable_announcements
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: update.send_update_notice()
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/models.py", line 2777, in send_update_notice
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: for subject, body in mail.get_template(self, templatetype):
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/mail.py", line 354, in get_template
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: use_template = globals()[use_template]
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: KeyError: u'fedora_modular_errata_template'
```
This is because someone had designed Bodhi's e-mail system to hard code releases into the Python code rather than to use settings, and the modular repo isn't hard coded :/ | 1.0 | Bodhi crashes while composing a modular repo if it has a mailing list defined - Bodhi was [recently configured to have a mailing list configured for ```fedora_modular```](https://pagure.io/fedora-infrastructure/issue/6872). The next time a modular repo was composed, it failed with this traceback:
```
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: Traceback (most recent call last):
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/consumers/masher.py", line 337, in run
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: self.work()
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/consumers/masher.py", line 422, in work
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: self.send_stable_announcements()
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/consumers/masher.py", line 70, in wrapper
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: retval = method(self, *args, **kwargs)
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/consumers/masher.py", line 732, in send_stable_announcements
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: update.send_update_notice()
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/models.py", line 2777, in send_update_notice
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: for subject, body in mail.get_template(self, templatetype):
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: File "/usr/lib/python2.7/site-packages/bodhi/server/mail.py", line 354, in get_template
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: use_template = globals()[use_template]
May 30 14:39:14 bodhi-backend01.phx2.fedoraproject.org fedmsg-hub[67822]: KeyError: u'fedora_modular_errata_template'
```
This is because someone had designed Bodhi's e-mail system to hard code releases into the Python code rather than to use settings, and the modular repo isn't hard coded :/ | priority | bodhi crashes while composing a modular repo if it has a mailing list defined bodhi was the next time a modular repo was composed it failed with this traceback may bodhi fedoraproject org fedmsg hub traceback most recent call last may bodhi fedoraproject org fedmsg hub file usr lib site packages bodhi server consumers masher py line in run may bodhi fedoraproject org fedmsg hub self work may bodhi fedoraproject org fedmsg hub file usr lib site packages bodhi server consumers masher py line in work may bodhi fedoraproject org fedmsg hub self send stable announcements may bodhi fedoraproject org fedmsg hub file usr lib site packages bodhi server consumers masher py line in wrapper may bodhi fedoraproject org fedmsg hub retval method self args kwargs may bodhi fedoraproject org fedmsg hub file usr lib site packages bodhi server consumers masher py line in send stable announcements may bodhi fedoraproject org fedmsg hub update send update notice may bodhi fedoraproject org fedmsg hub file usr lib site packages bodhi server models py line in send update notice may bodhi fedoraproject org fedmsg hub for subject body in mail get template self templatetype may bodhi fedoraproject org fedmsg hub file usr lib site packages bodhi server mail py line in get template may bodhi fedoraproject org fedmsg hub use template globals may bodhi fedoraproject org fedmsg hub keyerror u fedora modular errata template this is because someone had designed bodhi s e mail system to hard code releases into the python code rather than to use settings and the modular repo isn t hard coded | 1 |
759,510 | 26,598,964,936 | IssuesEvent | 2023-01-23 14:30:46 | kubeshop/testkube | https://api.github.com/repos/kubeshop/testkube | closed | Test stuck in state running despite container and job completed | bug 🐛 high-priority | **Describe the bug**
After upgrading testkube from 1.8.16 to 1.8.29, my custom executor does not work correctly anymore. After the pod and job in the cluster are terminated/completed, the UI and CLI of Testkube show status "running".
**To Reproduce**
Steps to reproduce the behavior:
1. Upgrade testkube from 1.8.16 -> 1.8.29 or Testkube Helm Chart from 1.8.51 -> 1.8.100
2. Run a test with a executor based on executor template which was working correctly before
3. Test is stuck in state "Running" forever in CLI and UI and only stops after aborting
**Expected behavior**
Test is terminated and shows success/fail after pod and job is terminated
**Version / Cluster**
Could fix this problem after Downgrading to 1.8.16(Server Version)/1.8.51(Helm Chart Testkube)
Issue occurred for me from 1.8.29 (Server)/1.8.100(Helm Chart) upwards (tested also with 1.8.30/1.8.104)
| 1.0 | Test stuck in state running despite container and job completed - **Describe the bug**
After upgrading testkube from 1.8.16 to 1.8.29, my custom executor does not work correctly anymore. After the pod and job in the cluster are terminated/completed, the UI and CLI of Testkube show status "running".
**To Reproduce**
Steps to reproduce the behavior:
1. Upgrade testkube from 1.8.16 -> 1.8.29 or Testkube Helm Chart from 1.8.51 -> 1.8.100
2. Run a test with an executor based on an executor template which was working correctly before
3. Test is stuck in state "Running" forever in CLI and UI and only stops after aborting
**Expected behavior**
Test is terminated and shows success/fail after pod and job is terminated
**Version / Cluster**
Could fix this problem after Downgrading to 1.8.16(Server Version)/1.8.51(Helm Chart Testkube)
Issue occurred for me from 1.8.29 (Server)/1.8.100(Helm Chart) upwards (tested also with 1.8.30/1.8.104)
| priority | test stuck in state running despite container and job completed describe the bug after upgrading testkube from to my custom executor does not work correctly anymore after the pod and job in the cluster are terminated completed the ui and cli of testkube show status running to reproduce steps to reproduce the behavior upgrade testkube from or testkube helm chart from run a test with a executor based on executor template which was working correctly before test is stuck in state running forever in cli and ui and only stops after aborting expected behavior test is terminated and shows success fail after pod and job is terminated version cluster could fix this problem after downgrading to server version helm chart testkube issue occured for me from server helm chart upwards tested also with | 1 |
249,530 | 7,963,010,591 | IssuesEvent | 2018-07-13 16:01:26 | KB1RD/LearnASM | https://api.github.com/repos/KB1RD/LearnASM | closed | Fix the Instruction Format | High Priority enhancement | Since the deprecation of the link bit, I have been considering several large changes to the language. These should be done before I even start thinking about the "learn" mode.
See [https://learnasm.kb1rd.net/trm/#future-changes](https://learnasm.kb1rd.net/trm/#future-changes) | 1.0 | Fix the Instruction Format - Since the deprecation of the link bit, I have been considering several large changes to the language. These should be done before I even start thinking about the "learn" mode.
See [https://learnasm.kb1rd.net/trm/#future-changes](https://learnasm.kb1rd.net/trm/#future-changes) | priority | fix the instruction format since the deprecation of the link bit i have been considering several large changes to the language these should be done before i even start thinking about the learn mode see | 1 |
751,417 | 26,244,172,846 | IssuesEvent | 2023-01-05 14:01:25 | miversen33/netman.nvim | https://api.github.com/repos/miversen33/netman.nvim | closed | Nmlogs fails to run with provided error | bug Core High Priority | When running the command [`:Nmlogs`](https://github.com/miversen33/netman.nvim/tree/v1.1#nmlogs) you receive the following error
```
E5108: Error executing lua [string ":lua"]:1: attempt to call field 'dump_info' (a nil value)
stack traceback:
[string ":lua"]:1: in main chunk
```
Note, this is on v1.1 only (main will still work). This is almost certainly due to a bit of work that was done on [the netman utils](https://github.com/miversen33/netman.nvim/blob/main/lua/netman/utils.lua) file, as the function it is calling out to (`dump_info`) no longer exists there. I will have to update Nmlogs to work with some of the new API architecture, as well as maybe stop breaking stuff in the future 🙃 | 1.0 | Nmlogs fails to run with provided error - When running the command [`:Nmlogs`](https://github.com/miversen33/netman.nvim/tree/v1.1#nmlogs) you receive the following error
```
E5108: Error executing lua [string ":lua"]:1: attempt to call field 'dump_info' (a nil value)
stack traceback:
[string ":lua"]:1: in main chunk
```
Note, this is on v1.1 only (main will still work). This is almost certainly due to a bit of work that was done on [the netman utils](https://github.com/miversen33/netman.nvim/blob/main/lua/netman/utils.lua) file, as the function it is calling out to (`dump_info`) no longer exists there. I will have to update Nmlogs to work with some of the new API architecture, as well as maybe stop breaking stuff in the future 🙃 | priority | nmlogs fails to run with provided error when running the command you receive the following error error executing lua attempt to call field dump info a nil value stack traceback in main chunk note this is on only main will still work this is almost certainly due to a bit of work that was done on file as the function it is calling out to dump info no longer exists there i will have to update nmlogs to work with some of the new api architecture as well as maybe stop breaking stuff in the future 🙃 | 1 |
459,904 | 13,201,077,996 | IssuesEvent | 2020-08-14 09:26:38 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | 3. Create mda_structure_status_report | NTD Priority: High | Create mda_structure_status_report with structure_id, jurisdiction, mda_treatment_status
As part of this, get population metadata into OpenSRP location properties or a separate table (ideally not done by this team, but +2d if we have to get rough info in ourselves for sanity checking)
Estimate: 3 days (lots of structures makes it slow to iterate here) | 1.0 | 3. Create mda_structure_status_report - Create mda_structure_status_report with structure_id, jurisdiction, mda_treatment_status
As part of this, get population metadata into OpenSRP location properties or a separate table (ideally not done by this team, but +2d if we have to get rough info in ourselves for sanity checking)
Estimate: 3 days (lots of structures makes it slow to iterate here) | priority | create mda structure status report create mda structure status report with structure id jurisdiction mda treatment status as part of this get population metadata into opensrp location properties or a separate table ideally not done by this team but if we have to get rough info in ourselves for sanity checking estimate days lots of structures makes it slow to iterate here | 1 |
194,019 | 6,890,673,553 | IssuesEvent | 2017-11-22 14:45:51 | fabric8-launch/appdev-documentation | https://api.github.com/repos/fabric8-launch/appdev-documentation | closed | WildFly Swarm guide: missing product version number | Effort | Low Issue | Has PR Priority | High Runtime | WildFly Swarm Type | Bug | The Swarm guide doesn't tell what's the version number to use when adding Maven dependencies. What's more, when showing the BOM usage, it even uses a community version number for demonstration!
FTR, the correct version number for our first GA is `7.0.0.redhat-8`. | 1.0 | WildFly Swarm guide: missing product version number - The Swarm guide doesn't tell what's the version number to use when adding Maven dependencies. What's more, when showing the BOM usage, it even uses a community version number for demonstration!
FTR, the correct version number for our first GA is `7.0.0.redhat-8`. | priority | wildfly swarm guide missing product version number the swarm guide doesn t tell what s the version number to use when adding maven dependencies what s more when showing the bom usage it even uses a community version number for demonstration ftr the correct version number for our first ga is redhat | 1 |
481,817 | 13,892,462,067 | IssuesEvent | 2020-10-19 12:14:37 | status-im/status-react | https://api.github.com/repos/status-im/status-react | opened | Endless progress bar when connecting when coming back from offline mode | 1.8 bug high-priority high-severity | # Bug Report
## Problem
Endless progress bar and messages are not fetched when application is back from offline mode.
#### Expected behavior
"Connected" and no progress bar after coming back from offline, messages are fetched
#### Actual behavior
"Connected" and endless progress bar (waited for ~4 mins)

### Reproduction
[comment]: # (Describe how we can replicate the bug step by step.)
- Device 1: Open Status
- Device 1: Join any public chat
- Device 1: Turn on airplane mode
- Device 2: Send to public chat several messages
- Device 1: Turn off airplane mode and wait
### Additional Information
- Status version: 1.8 RC 1
- Operating System: Android, iOS
#### Logs
[Status.log](https://github.com/status-im/status-react/files/5401651/Status.log)
[geth.log](https://github.com/status-im/status-react/files/5401654/geth.log)
| 1.0 | Endless progress bar when connecting when coming back from offline mode - # Bug Report
## Problem
Endless progress bar and messages are not fetched when application is back from offline mode.
#### Expected behavior
"Connected" and no progress bar after coming back from offline, messages are fetched
#### Actual behavior
"Connected" and endless progress bar (waited for ~4 mins)

### Reproduction
[comment]: # (Describe how we can replicate the bug step by step.)
- Device 1: Open Status
- Device 1: Join any public chat
- Device 1: Turn on airplane mode
- Device 2: Send to public chat several messages
- Device 1: Turn off airplane mode and wait
### Additional Information
- Status version: 1.8 RC 1
- Operating System: Android, iOS
#### Logs
[Status.log](https://github.com/status-im/status-react/files/5401651/Status.log)
[geth.log](https://github.com/status-im/status-react/files/5401654/geth.log)
| priority | endless progress bar when connecting when caming back from offline mode bug report problem endless progress bar and messages are not fetched when application is back from offline mode expected behavior connected and no progress bar after coming back from offline messages are fetched actual behavior connected and endless progress bar waited for mins reproduction describe how we can replicate the bug step by step device open status device join any public chat device turn on airplane mode device send to public chat several messages device turn off airplane mode and wait additional information status version rc operating system android ios logs | 1 |
105,807 | 4,242,229,584 | IssuesEvent | 2016-07-06 18:48:10 | blacklocus/anvil | https://api.github.com/repos/blacklocus/anvil | reopened | e2e tests | priority-high | should really get some tests in here because prior things are going to start breaking as more features are added. The current rough set of features follows, which could be broken down into smaller scenarios, or could all be stitched into one giant "story"
- Create/delete several walls
- Add/remove several boards
- Add/remove several series
- Rename a wall
- Rename boards
- Adjust window/period
- "Save as default" window/period | 1.0 | e2e tests - should really get some tests in here because prior things are going to start breaking as more features are added. The current rough set of features follows, which could be broken down into smaller scenarios, or could all be stitched into one giant "story"
- Create/delete several walls
- Add/remove several boards
- Add/remove several series
- Rename a wall
- Rename boards
- Adjust window/period
- "Save as default" window/period | priority | tests should really get some tests in here because prior things are going to start breaking as more features are added the current rough set of features follows which could be broken down into smaller scenarios or could all be stitched into one giant story create delete several walls add remove several boards add remove several series rename a wall rename boards adjust window period save as default window period | 1 |
24,511 | 2,668,242,669 | IssuesEvent | 2015-03-23 06:29:30 | cs2103jan2015-t15-2j/main | https://api.github.com/repos/cs2103jan2015-t15-2j/main | closed | A user can modify previously added tasks | priority.high type.story | ...so that the user can have more flexibility using the application. | 1.0 | A user can modify previously added tasks - ...so that the user can have more flexibility using the application. | priority | a user can modify previously added tasks so that the user can have more flexibility using the application | 1 |
349,398 | 10,468,718,452 | IssuesEvent | 2019-09-22 15:38:50 | nim-lang/nimble | https://api.github.com/repos/nim-lang/nimble | closed | Dependency resolution issue | Bug High Priority | PackageFoo:
* requires "httpbeast 0.2.2"
* requires "jester#5a54b5e2cc0b6b7405536fdd79b65aa133cac6c8"
Jester#5a54b5e2cc0b6b7405536fdd79b65aa133cac6c8:
* requires "httpbeast >= 0.2.2"
Nimble gives:
```
Error: Cannot satisfy the dependency on httpbeast #head and httpbeast 0.2.2
```
Nimble should be able to satisfy these dependencies. | 1.0 | Dependency resolution issue - PackageFoo:
* requires "httpbeast 0.2.2"
* requires "jester#5a54b5e2cc0b6b7405536fdd79b65aa133cac6c8"
Jester#5a54b5e2cc0b6b7405536fdd79b65aa133cac6c8:
* requires "httpbeast >= 0.2.2"
Nimble gives:
```
Error: Cannot satisfy the dependency on httpbeast #head and httpbeast 0.2.2
```
Nimble should be able to satisfy these dependencies. | priority | dependency resolution issue packagefoo requires httpbeast requires jester jester requires httpbeast nimble gives error cannot satisfy the dependency on httpbeast head and httpbeast nimble should be able to satisfy these dependencies | 1 |
139,226 | 5,358,242,966 | IssuesEvent | 2017-02-20 21:26:56 | Angblah/The-Comparator | https://api.github.com/repos/Angblah/The-Comparator | opened | Toolbar Implementation - Export | Priority: High Stack: Frontend Status: Available Type: Feature | Be able to click "Export" in the toolbar to export to different file formats for offline viewing. | 1.0 | Toolbar Implementation - Export - Be able to click "Export" in the toolbar to export to different file formats for offline viewing. | priority | toolbar implementation export be able to click export in the toolbar to export to different file formats for offline viewing | 1 |
472,386 | 13,623,487,409 | IssuesEvent | 2020-09-24 06:25:23 | OpenSRP/opensrp-client-chw | https://api.github.com/repos/OpenSRP/opensrp-client-chw | closed | Bugs found from QA 9.11.20 | bug high priority | There are four breaking issues that prevent testing - we cannot share the APK with the client in its current status.
- [ ] Clicking "view vaccine history" crashes the app. I am unable to test whether the changes to the vaccine schedule have been made.
- [ ] Clicking "record visit" opens an empty view

- [ ] "View upcoming services" on the child profile view still is showing services that should have been removed such as breastfeeding and Vitamin A

- [ ] The "Upcoming Services" view is an empty screen. I cannot test if that text has been appropriately split into two sections.

| 1.0 | Bugs found from QA 9.11.20 - There are four breaking issues that prevent testing - we cannot share the APK with the client in its current status.
- [ ] Clicking "view vaccine history" crashes the app. I am unable to test whether the changes to the vaccine schedule have been made.
- [ ] Clicking "record visit" opens an empty view

- [ ] "View upcoming services" on the child profile view still is showing services that should have been removed such as breastfeeding and Vitamin A

- [ ] The "Upcoming Services" view is an empty screen. I cannot test if that text has been appropriately split into two sections.

| priority | bugs found from qa there are four breaking issues that prevent testing we cannot share the apk with the client in its current status clicking view vaccine history crashes the app i am unable to test whether the changes to the vaccine schedule have been made clicking record visit opens an empty view view upcoming services on the child profile view still is showing services that should have been removed such as breastfeeding and vitamin a the upcoming services view is an empty screen i cannot test if that text has been appropriately split into two sections | 1 |
828,456 | 31,829,750,424 | IssuesEvent | 2023-09-14 09:48:38 | code4romania/crestem-ong | https://api.github.com/repos/code4romania/crestem-ong | opened | [Admin ONG/Persoane resursă] implement resource person's profile page | enhancement :rocket: high-priority :fire: | The Admin ONG user should be able to view each resource person on an individual child page.
From the 'Persoane resursă' page, where the list of all resource persons is displayed, the user should be able to click on each resource person and access the individual resource person's profile page. ([see design here](https://www.figma.com/file/EeUtizbTE8Bkqhjx70CwOB/Cre%C8%99tem-ONG?type=design&node-id=1686-29459&mode=design&t=n84o6A92aLuMVw5U-4)) | 1.0 | [Admin ONG/Persoane resursă] implement resource person's profile page - The Admin ONG user should be able to view each resource person on an individual child page.
From the 'Persoane resursă' page, where the list of all resource persons is displayed, the user should be able to click on each resource person and access the individual resource person's profile page. ([see design here](https://www.figma.com/file/EeUtizbTE8Bkqhjx70CwOB/Cre%C8%99tem-ONG?type=design&node-id=1686-29459&mode=design&t=n84o6A92aLuMVw5U-4)) | priority | implement resource person s profile page the admin ong user should be able to view each resource person on an individual child page from the persoane resursă page where the list of all resource persons is displayed the user should be able to click on each resource person and access the individual resource person s profile page | 1 |
284,308 | 8,737,255,276 | IssuesEvent | 2018-12-11 21:59:52 | aowen87/TicketTester | https://api.github.com/repos/aowen87/TicketTester | closed | Problem displaying multiple plots with history variables from Ale3d. | bug crash likelihood medium priority reviewed severity high wrong results | Al reported some bugs displaying multiple plots with history variables. Here is information about reproducing them: The data files to reproduce the bug are: if7f_001.00020 if7f_004.00020 Bug 1: Step to demonstrate bug: 1) Have "Apply subset selections to all plots" on (the default). 2) Open if7f_004.00020 3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp 4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp 5) Press Draw The second plot will generate the message: WARNING: The Pseudocolor plot of variable "hist/incmat2_2/dmfdt/temp" yielded no data. This is because VisIt used the SIL selection from plot 1 of incmat1_1 on incmat2_2 off incmat3_3 off for plot 2, when the second variable is defined on incmat2_2. The work around is to turn off "Apply subset selections to all plots" and set the SIL for the second plot to be incmat1_1 off incmat2_2 on incmat3_3 off Bug 2: 1) Have "Apply subset selections to all plots" on (the default). 2) Open if7f_004.00020 3) Add a Filled Boundary plot of material. 4) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp 5) Press Draw This gives a plot where the Pseudocolor plot is drawing extra partial zones. This is because it selects all the materials for the Pseudocolor plot when the variable is only defined on incmat1_1. You may need to hide the Filled Boundary plot to see the problem.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 958
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Problem displaying multiple plots with history variables from Ale3d.
Assigned to: Eric Brugger
Category:
Target version: 2.4.2
Author: Eric Brugger
Start: 02/08/2012
Due date:
% Done: 100
Estimated time: 16.0
Created: 02/08/2012 02:53 pm
Updated: 02/23/2012 08:29 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.4.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Al reported some bugs displaying multiple plots with history variables. Here is information about reproducing them: The data files to reproduce the bug are: if7f_001.00020 if7f_004.00020 Bug 1: Step to demonstrate bug: 1) Have "Apply subset selections to all plots" on (the default). 2) Open if7f_004.00020 3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp 4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp 5) Press Draw The second plot will generate the message: WARNING: The Pseudocolor plot of variable "hist/incmat2_2/dmfdt/temp" yielded no data. This is because VisIt used the SIL selection from plot 1 of incmat1_1 on incmat2_2 off incmat3_3 off for plot 2, when the second variable is defined on incmat2_2. The work around is to turn off "Apply subset selections to all plots" and set the SIL for the second plot to be incmat1_1 off incmat2_2 on incmat3_3 off Bug 2: 1) Have "Apply subset selections to all plots" on (the default). 2) Open if7f_004.00020 3) Add a Filled Boundary plot of material. 4) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp 5) Press Draw This gives a plot where the Pseudocolor plot is drawing extra partial zones. This is because it selects all the materials for the Pseudocolor plot when the variable is only defined on incmat1_1. You may need to hide the Filled Boundary plot to see the problem.
Comments:
With some investigation I found the obvious, that the routine avtSILRestriction::SetFromCompatibleRestriction is returning true when the second plot is created. This is because the SILs both have incmat1_1, incmat2_2 and incmat3_3. So either that routine needs to know that it isn't really the same SIL, or the SIL needs to only consist of the parts that make sense, then there would be no problem in SetFromCompatibleRestriction. Sounds like the way to go, don't know how much work is involved. It also seems right since the user shouldn't even see the other materials as possibilities for selection. So I don't think having a different SIL for the different variables is a winner since there is one material object per file, which is used to set avtMaterialMetaData. I committed revisions 17424 and 17426 to the 2.4 RC and trunk with the following change: 1) I modified VisIt so that when you have "Apply subset selections to all plots" on and add a plot of a material restricted variable it doesn't apply the SIL from a compatible plot unless the variables of both plots are restricted to the same materials. This resolves #958. M help/en_US/relnotes2.4.2.html M viewer/main/ViewerPlotList.C
| 1.0 | Problem displaying multiple plots with history variables from Ale3d. - Al reported some bugs displaying multiple plots with history variables. Here is information about reproducing them: The data files to reproduce the bug are: if7f_001.00020 if7f_004.00020 Bug 1: Step to demonstrate bug: 1) Have "Apply subset selections to all plots" on (the default). 2) Open if7f_004.00020 3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp 4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp 5) Press Draw The second plot will generate the message: WARNING: The Pseudocolor plot of variable "hist/incmat2_2/dmfdt/temp" yielded no data. This is because VisIt used the SIL selection from plot 1 of incmat1_1 on incmat2_2 off incmat3_3 off for plot 2, when the second variable is defined on incmat2_2. The work around is to turn off "Apply subset selections to all plots" and set the SIL for the second plot to be incmat1_1 off incmat2_2 on incmat3_3 off Bug 2: 1) Have "Apply subset selections to all plots" on (the default). 2) Open if7f_004.00020 3) Add a Filled Boundary plot of material. 4) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp 5) Press Draw This gives a plot where the Pseudocolor plot is drawing extra partial zones. This is because it selects all the materials for the Pseudocolor plot when the variable is only defined on incmat1_1. You may need to hide the Filled Boundary plot to see the problem.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 958
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Problem displaying multiple plots with history variables from Ale3d.
Assigned to: Eric Brugger
Category:
Target version: 2.4.2
Author: Eric Brugger
Start: 02/08/2012
Due date:
% Done: 100
Estimated time: 16.0
Created: 02/08/2012 02:53 pm
Updated: 02/23/2012 08:29 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.4.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Al reported some bugs displaying multiple plots with history variables. Here is information about reproducing them: The data files to reproduce the bug are: if7f_001.00020 if7f_004.00020 Bug 1: Step to demonstrate bug: 1) Have "Apply subset selections to all plots" on (the default). 2) Open if7f_004.00020 3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp 4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp 5) Press Draw The second plot will generate the message: WARNING: The Pseudocolor plot of variable "hist/incmat2_2/dmfdt/temp" yielded no data. This is because VisIt used the SIL selection from plot 1 of incmat1_1 on incmat2_2 off incmat3_3 off for plot 2, when the second variable is defined on incmat2_2. The work around is to turn off "Apply subset selections to all plots" and set the SIL for the second plot to be incmat1_1 off incmat2_2 on incmat3_3 off Bug 2: 1) Have "Apply subset selections to all plots" on (the default). 2) Open if7f_004.00020 3) Add a Filled Boundary plot of material. 4) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp 5) Press Draw This gives a plot where the Pseudocolor plot is drawing extra partial zones. This is because it selects all the materials for the Pseudocolor plot when the variable is only defined on incmat1_1. You may need to hide the Filled Boundary plot to see the problem.
Comments:
With some investigation I found the obvious, that the routine avtSILRestriction::SetFromCompatibleRestriction is returning true when the second plot is created. This is because the SILs both have incmat1_1, incmat2_2 and incmat3_3. So either that routine needs to know that it isn't really the same SIL, or the SIL needs to only consist of the parts that make sense, then there would be no problem in SetFromCompatibleRestriction. Sounds like the way to go, don't know how much work is involved. It also seems right since the user shouldn't even see the other materials as possibilities for selection. So I don't think having a different SIL for the different variables is a winner since there is one material object per file, which is used to set avtMaterialMetaData. I committed revisions 17424 and 17426 to the 2.4 RC and trunk with the following change: 1) I modified VisIt so that when you have "Apply subset selections to all plots" on and add a plot of a material restricted variable it doesn't apply the SIL from a compatible plot unless the variables of both plots are restricted to the same materials. This resolves #958. M help/en_US/relnotes2.4.2.html M viewer/main/ViewerPlotList.C
| priority | problem displaying multiple plots with history variables from al reported some bugs displaying multiple plots with history variables here is information about reproducing them the data files to reproduce the bug are bug step to demonstrate bug have apply subset selections to all plots on the default open add a pseudocolor of hist dmfdt temp add a pseudocolor of hist dmfdt temp press draw the second plot will generate the message warning the pseudocolor plot of variable hist dmfdt temp yielded no data this is because visit used the sil selection from plot of on off off for plot when the second variable is defined on the work around is to turn off apply subset selections to all plots and set the sil for the second plot to be off on off bug have apply subset selections to all plots on the default open add a filled boundary plot of material add a pseudocolor of hist dmfdt temp press draw this gives a plot where the pseudocolor plot is drawing extra partial zones this is because it selects all the materials for the pseudocolor plot when the variable is only defined on you may need to hide the filled boundary plot to see the problem redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority urgent subject problem displaying multiple plots with history variables from assigned to eric brugger category target version author eric brugger start due date done estimated time created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description al reported some bugs displaying multiple plots with history variables here is information about reproducing them the data files to reproduce the bug are bug step to demonstrate bug have apply subset selections to all plots on the default open add a pseudocolor of 
hist dmfdt temp add a pseudocolor of hist dmfdt temp press draw the second plot will generate the message warning the pseudocolor plot of variable hist dmfdt temp yielded no data this is because visit used the sil selection from plot of on off off for plot when the second variable is defined on the work around is to turn off apply subset selections to all plots and set the sil for the second plot to be off on off bug have apply subset selections to all plots on the default open add a filled boundary plot of material add a pseudocolor of hist dmfdt temp press draw this gives a plot where the pseudocolor plot is drawing extra partial zones this is because it selects all the materials for the pseudocolor plot when the variable is only defined on you may need to hide the filled boundary plot to see the problem comments with some investigation i found the obvious that the routine avtsilrestriction setfromcompatiblerestriction is returning true when the second plot is created this is because the sils both have and so either that routine needs to know that it isn t really the same sil or the sil needs to only consist of the parts that make sense then there would be no problem in setfromcompatiblerestriction sounds like the way to go don t know how much work is involved it also seems right since the user shouldn t even see the other materials as possibilities for selection so i don t think having a different sil for the different variables is a winner since there is one material object per file which is used to set avtmaterialmetadata i committed revisions and to the rc and trunk with thefollowing change i modified visit so that when you have apply subset selections to all plots on and add a plot of a material restricted variable it doesn t apply the sil from a compatible plot unless the variables of both plots are restricted to the same materials this resolves m help en us htmlm viewer main viewerplotlist c | 1 |
767,488 | 26,927,637,433 | IssuesEvent | 2023-02-07 14:48:46 | kubermatic/kubermatic | https://api.github.com/repos/kubermatic/kubermatic | closed | usercluster-controller-manager fails to run after upgrade to v2.22.0-alpha.0 | kind/bug priority/high sig/networking | ### What happened?
After upgrading from KKP 2.21.x to `v2.22.0-alpha.0`, usercluster-controller-manager for clusters with the Tunneling expose strategy fails to run with the following error:
```
invalid value "" for flag -tunneling-agent-ip: "" is not valid ip address
```
The root cause seems to be that the code is expecting that `cluster.spec.clusterNetwork.tunnelingAgentIP` is defaulted, which is not happening for existing clusters.
The workaround is to set it manually for affected clusters:
```
spec:
clusterNetwork:
tunnelingAgentIP: 100.64.30.10
```
### Expected behavior
KKP upgrade should not break existing cluster's functionality.
### How to reproduce the issue?
- on KKP 2.21.x, create a user cluster with the Tunneling expose strategy
- upgrade KKP to `v2.22.0-alpha.0`
### How is your environment configured?
- KKP version: `v2.22.0-alpha.0`
- Shared or separate master/seed clusters?: shared
| 1.0 | usercluster-controller-manager fails to run after upgrade to v2.22.0-alpha.0 - ### What happened?
After upgrading from KKP 2.21.x to `v2.22.0-alpha.0`, usercluster-controller-manager for clusters with the Tunneling expose strategy fails to run with the following error:
```
invalid value "" for flag -tunneling-agent-ip: "" is not valid ip address
```
The root cause seems to be that the code is expecting that `cluster.spec.clusterNetwork.tunnelingAgentIP` is defaulted, which is not happening for existing clusters.
The workaround is to set it manually for affected clusters:
```
spec:
clusterNetwork:
tunnelingAgentIP: 100.64.30.10
```
### Expected behavior
KKP upgrade should not break existing cluster's functionality.
### How to reproduce the issue?
- on KKP 2.21.x, create a user cluster with the Tunneling expose strategy
- upgrade KKP to `v2.22.0-alpha.0`
### How is your environment configured?
- KKP version: `v2.22.0-alpha.0`
- Shared or separate master/seed clusters?: shared
| priority | usercluster controller manager fails to run after upgrade to alpha what happened after upgrading from kkp x to alpha usercluster controller manager for clusters with the tunneling expose strategy fails to run with the following error invalid value for flag tunneling agent ip is not valid ip address the root cause seems to be that the code is expecting that cluster spec clusternetwork tunnelingagentip is defaulted which is not happening for existing clusters the workaround is to set it manually for affected clusters spec clusternetwork tunnelingagentip expected behavior kkp upgrade should not break existing cluster s functionality how to reproduce the issue on kkp x create an user cluster with the tunneling expose strategy upgrade kkp to alpha how is your environment configured kkp version alpha shared or separate master seed clusters shared | 1 |
121,744 | 4,820,955,835 | IssuesEvent | 2016-11-05 03:07:25 | solrmarc/solrmarc | https://api.github.com/repos/solrmarc/solrmarc | closed | get better at documenting changes and communicating w community | auto-migrated Priority-High Type-Task | ```
We need to get better at promulgating our mods, having a place to check for
updates (e.g. issues resolved), documenting our changes (javadoc, issues,
documentation ...)
```
Original issue reported on code.google.com by `naomi.du...@gmail.com` on 21 Aug 2009 at 7:27
| 1.0 | get better at documenting changes and communicating w community - ```
We need to get better at promulgating our mods, having a place to check for
updates (e.g. issues resolved), documenting our changes (javadoc, issues,
documentation ...)
```
Original issue reported on code.google.com by `naomi.du...@gmail.com` on 21 Aug 2009 at 7:27
| priority | get better at documenting changes and communicating w community we need to get better at promulgating our mods having a place to check for updates e g issues resolved documenting our changes javadoc issues documentation original issue reported on code google com by naomi du gmail com on aug at | 1 |
79,377 | 3,535,320,632 | IssuesEvent | 2016-01-16 12:13:05 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | Server error when accessing notifications | Component: UX & Upgrade Priority: High Status: QA Type: Bug |
Internal Server Error - An unexpected error was encountered, no more information is available.
I have a movie file of the steps to reproduce, but can't attach that to github-not supported. | 1.0 | Server error when accessing notifications -
Internal Server Error - An unexpected error was encountered, no more information is available.
I have a movie file of the steps to reproduce, but can't attach that to github-not supported. | priority | server error when accessing notifications internal server error an unexpected error was encountered no more information is available i have a movie file of the steps to reproduce but can t attach that to github not supported | 1 |
669,456 | 22,625,808,664 | IssuesEvent | 2022-06-30 10:31:54 | heading1/WYLSBingsu | https://api.github.com/repos/heading1/WYLSBingsu | closed | [FE] Complete the topping selection form | 🖥 Frontend ❗️high-priority 🔨 Feature | ## 🔨 Feature description
- Complete the topping selection form
## 📑 Completion criteria
- [ ] Complete the topping selection form
## 💭 Related backlog
[FE] Writing page - Design - Complete the topping selection form
## 💭 Estimated work time
2h
| 1.0 | [FE] Complete the topping selection form - ## 🔨 Feature description
- Complete the topping selection form
## 📑 Completion criteria
- [ ] Complete the topping selection form
## 💭 Related backlog
[FE] Writing page - Design - Complete the topping selection form
## 💭 Estimated work time
2h
| priority | complete the topping selection form 🔨 feature description complete the topping selection form 📑 completion criteria complete the topping selection form 💭 related backlog writing page design complete the topping selection form 💭 estimated work time | 1 |
278,017 | 8,635,067,166 | IssuesEvent | 2018-11-22 20:06:05 | QuantEcon/lecture-source-jl | https://api.github.com/repos/QuantEcon/lecture-source-jl | closed | Sanity Check on amss | high-priority | We just merged #159. This came with a lot of code changes and a few math ones (for example, moving to linear interpolation and relaxing the error tolerance a bit).
We need to do a smoke-test of the output vs. the lectures.quantecon.org/jl/amss.html It looks like @Nosferican shifted the plot scale a bit, so perhaps he could weigh in here as to some of the more visible differences in the y-axis.
As mentioned, we're really nearing the finish line, so working on this sooner rather than later would be appreciated. Ping me on slack if anything is unclear. | 1.0 | Sanity Check on amss - We just merged #159. This came with a lot of code changes and a few math ones (for example, moving to linear interpolation and relaxing the error tolerance a bit).
We need to do a smoke-test of the output vs. the lectures.quantecon.org/jl/amss.html It looks like @Nosferican shifted the plot scale a bit, so perhaps he could weigh in here as to some of the more visible differences in the y-axis.
As mentioned, we're really nearing the finish line, so working on this sooner rather than later would be appreciated. Ping me on slack if anything is unclear. | priority | sanity check on amss we just merged this came with a lot of code changes and a few math ones for example moving to linear interpolation and relaxing the error tolerance a bit we need to do a smoke test of the output vs the lectures quantecon org jl amss html it looks like nosferican shifted the plot scale a bit so perhaps he could weigh in here as to some of the more visible differences in the y axis as mentioned we re really nearing the finish line so working on this sooner rather than later would be appreciated ping me on slack if anything is unclear | 1 |
370,420 | 10,931,829,741 | IssuesEvent | 2019-11-23 13:24:27 | jonfroehlich/makeabilitylabwebsite | https://api.github.com/repos/jonfroehlich/makeabilitylabwebsite | closed | Django and Postgres aren't communicating properly | Priority: Very High bug requires-updating-model-database | Related issue: #361, #360
On production, Django seems to think that the state of the database is different from what it actually is. This causes Postgres not to update its tables when `makemigrations` or `migrate` are run. The result of this is usually a ton of `Server Error: 500` on pages that involve tables that have been modified. This error happens locally (sometimes) too.
For now, it would probably be best to hold back from adding/removing fields in `models.py` since this is where changes to the database are made. If we do need to add/remove a field, we'll also need to update the schema (on UW servers) manually since Django isn't doing it automatically.
Since it looks like this might be a problem with Django, it might be worthwhile to try using a newer version, in case this bug has been addressed in a more recent update. | 1.0 | Django and Postgres aren't communicating properly - Related issue: #361, #360
On production, Django seems to think that the state of the database is different from what it actually is. This causes Postgres not to update its tables when `makemigrations` or `migrate` are run. The result of this is usually a ton of `Server Error: 500` on pages that involve tables that have been modified. This error happens locally (sometimes) too.
For now, it would probably be best to hold back from adding/removing fields in `models.py` since this is where changes to the database are made. If we do need to add/remove a field, we'll also need to update the schema (on UW servers) manually since Django isn't doing it automatically.
Since it looks like this might be a problem with Django, it might be worthwhile to try using a newer version, in case this bug has been addressed in a more recent update. | priority | django and postgres aren t communicating properly related issue on production django seems to think that the state of the database is different from what it actually is this causes postgres not to update its tables when makemigrations or migrate are run the result of this is usually a ton of server error on pages that involve tables that have been modified this error happens locally sometimes too for now it would probably be best to hold back from adding removing fields in models py since this is where changes to the database are made if we do need to add remove a field we ll also need to update the schema on uw servers manually since django isn t doing it automatically since it looks like this might be a problem with django it might be worthwhile to try using a newer version in case this bug has been addressed in a more recent update | 1 |
281,949 | 8,701,172,688 | IssuesEvent | 2018-12-05 10:47:04 | AICrowd/ai-crowd-3 | https://api.github.com/repos/AICrowd/ai-crowd-3 | closed | Missing leaderboard entries | bug high priority | _From @seanfcarroll on October 17, 2017 10:40_
13 & 28 are missing from learning how to run
_Copied from original issue: crowdAI/crowdai#356_ | 1.0 | Missing leaderboard entries - _From @seanfcarroll on October 17, 2017 10:40_
13 & 28 are missing from learning how to run
_Copied from original issue: crowdAI/crowdai#356_ | priority | missing leaderboard entries from seanfcarroll on october are missing from learning how to run copied from original issue crowdai crowdai | 1 |
470,527 | 13,539,808,896 | IssuesEvent | 2020-09-16 13:55:18 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.0.2 beta release-70 ]Contract Board: Can only select learned materials as road types. | Category: Gameplay Priority: High | Can not select not yet learned road types using "Build Road"
missing a "show unlearned" button ( but i would think this feature is unnecessary here as it should always show all road types maybe? )
Before learning skills

After learning skills
 | 1.0 | [0.9.0.2 beta release-70 ]Contract Board: Can only select learned materials as road types. - Can not select not yet learned road types using "Build Road"
missing a "show unlearned" button ( but i would think this feature is unnecessary here as it should always show all road types maybe? )
Before learning skills

After learning skills
 | priority | contract board can only select learned materials as road types can not select not yet learned road types using build road missing a show unlearned button but i would think this feature is unnecessary here as it should always show all road types maybe before learning skills after learning skills | 1 |
546,544 | 16,014,398,436 | IssuesEvent | 2021-04-20 14:26:25 | diyabc/diyabcGUI | https://api.github.com/repos/diyabc/diyabcGUI | closed | [RF][model choice] adapt number of available simulation to model grouping/selection | bug high priority to be validated | If running on model choice on a subset of scenarii, the proposed number of available scenarii should depend on the selected scenarii. | 1.0 | [RF][model choice] adapt number of available simulation to model grouping/selection - If running on model choice on a subset of scenarii, the proposed number of available scenarii should depend on the selected scenarii. | priority | adapt number of available simulation to model grouping selection if running on model choice on a subset of scenarii the proposed number of available scenarii should depend on the selected scenarii | 1 |
772,124 | 27,107,699,088 | IssuesEvent | 2023-02-15 13:18:46 | godot-dlang/godot-dlang | https://api.github.com/repos/godot-dlang/godot-dlang | closed | gdextension headers breaking bind templates | extension api priority: high | @Superbelko new commit https://github.com/godotengine/godot-headers/commit/dfdf91336a50b2af4154beef853fdc75ae303666 turns `GDNativeMethodBindPtr` from `void*` into `const(void*)`.
This breaks `GodotMethod!` template since it has `GDNativeMethodBindPtr mb`, which is not const.
And it also tries modifying it with `mb = _godot_api.classdb_get_method_bind(cast(GDNativeStringNamePtr) StringName(className), cast(GDNativeStringNamePtr) StringName(methodName), hash);`, which is not acceptable | 1.0 | gdextension headers breaking bind templates - @Superbelko new commit https://github.com/godotengine/godot-headers/commit/dfdf91336a50b2af4154beef853fdc75ae303666 turns `GDNativeMethodBindPtr` from `void*` into `const(void*)`.
This breaks `GodotMethod!` template since it has `GDNativeMethodBindPtr mb`, which is not const.
And it also tries modifying it with `mb = _godot_api.classdb_get_method_bind(cast(GDNativeStringNamePtr) StringName(className), cast(GDNativeStringNamePtr) StringName(methodName), hash);`, which is not acceptable | priority | gdextension headers breaking bind templates superbelko new commit turns gdnativemethodbindptr from void into const void this brakes godotmethod template since it has gdnativemethodbindptr mb which is not const and it also tries modifying it with mb godot api classdb get method bind cast gdnativestringnameptr stringname classname cast gdnativestringnameptr stringname methodname hash which is not acceptable | 1 |
769,523 | 27,010,170,850 | IssuesEvent | 2023-02-10 14:50:00 | mantidproject/mslice | https://api.github.com/repos/mantidproject/mslice | closed | Exception when taking interactive cut on Slice along 2Theta | bug High priority | **Describe the bug**
An exception is thrown when taking an interactive cut on a slice plot along 2Theta.
**To Reproduce**
Steps to reproduce the behavior:
1. Load MAR21335_Ei60meV
2. Change x to 2Theta
3. Create an interactive cut on the slice plot
Traceback (most recent call last):
File "C:\MantidNightlyInstall\bin\lib\site-packages\matplotlib\cbook\__init__.py", line 307, in process
func(*args, **kwargs)
File "C:\MantidNightlyInstall\bin\lib\site-packages\matplotlib\widgets.py", line 2003, in release
self._release(event)
File "C:\MantidNightlyInstall\bin\lib\site-packages\matplotlib\widgets.py", line 3118, in _release
self.onselect(self._eventpress, self._eventrelease)
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\plotting\plot_window\interactive_cut.py", line 45, in plot_from_mouse_event
self.plot_cut(eclick.xdata, erelease.xdata, eclick.ydata, erelease.ydata)
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\plotting\plot_window\interactive_cut.py", line 58, in plot_cut
self._cut_plotter_presenter.plot_interactive_cut(workspace, cut, store, self.slice_plot.intensity_type)
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\presenters\cut_plotter_presenter.py", line 158, in plot_interactive_cut
self._plot_cut(workspace, cut, False, store, update_main=False, intensity_correction=intensity_correction)
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\presenters\cut_plotter_presenter.py", line 50, in _plot_cut
cut.cut_ws = compute_cut(workspace, cut_axis, integration_axis, cut.norm_to_one, cut.algorithm, store)
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\models\cut\cut.py", line 47, in cut_ws
self._update_cut_axis()
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\models\cut\cut.py", line 230, in _update_cut_axis
ws_cut_axis = self._cut_ws.raw_ws.getYDimension()
RuntimeError: Workspace does not have a Y dimension. | 1.0 | Exception when taking interactive cut on Slice along 2Theta - **Describe the bug**
An exception is thrown when taking an interactive cut on a slice plot along 2Theta.
**To Reproduce**
Steps to reproduce the behavior:
1. Load MAR21335_Ei60meV
2. Change x to 2Theta
3. Create an interactive cut on the slice plot
Traceback (most recent call last):
File "C:\MantidNightlyInstall\bin\lib\site-packages\matplotlib\cbook\__init__.py", line 307, in process
func(*args, **kwargs)
File "C:\MantidNightlyInstall\bin\lib\site-packages\matplotlib\widgets.py", line 2003, in release
self._release(event)
File "C:\MantidNightlyInstall\bin\lib\site-packages\matplotlib\widgets.py", line 3118, in _release
self.onselect(self._eventpress, self._eventrelease)
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\plotting\plot_window\interactive_cut.py", line 45, in plot_from_mouse_event
self.plot_cut(eclick.xdata, erelease.xdata, eclick.ydata, erelease.ydata)
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\plotting\plot_window\interactive_cut.py", line 58, in plot_cut
self._cut_plotter_presenter.plot_interactive_cut(workspace, cut, store, self.slice_plot.intensity_type)
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\presenters\cut_plotter_presenter.py", line 158, in plot_interactive_cut
self._plot_cut(workspace, cut, False, store, update_main=False, intensity_correction=intensity_correction)
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\presenters\cut_plotter_presenter.py", line 50, in _plot_cut
cut.cut_ws = compute_cut(workspace, cut_axis, integration_axis, cut.norm_to_one, cut.algorithm, store)
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\models\cut\cut.py", line 47, in cut_ws
self._update_cut_axis()
File "C:\MantidNightlyInstall\scripts\ExternalInterfaces\mslice\models\cut\cut.py", line 230, in _update_cut_axis
ws_cut_axis = self._cut_ws.raw_ws.getYDimension()
RuntimeError: Workspace does not have a Y dimension. | priority | exception when taking interactive cut on slice along describe the bug an exception is thrown when taking an interactive cut on a slice plot along to reproduce steps to reproduce the behavior load change x to create an interactive cut on the slice plot traceback most recent call last file c mantidnightlyinstall bin lib site packages matplotlib cbook init py line in process func args kwargs file c mantidnightlyinstall bin lib site packages matplotlib widgets py line in release self release event file c mantidnightlyinstall bin lib site packages matplotlib widgets py line in release self onselect self eventpress self eventrelease file c mantidnightlyinstall scripts externalinterfaces mslice plotting plot window interactive cut py line in plot from mouse event self plot cut eclick xdata erelease xdata eclick ydata erelease ydata file c mantidnightlyinstall scripts externalinterfaces mslice plotting plot window interactive cut py line in plot cut self cut plotter presenter plot interactive cut workspace cut store self slice plot intensity type file c mantidnightlyinstall scripts externalinterfaces mslice presenters cut plotter presenter py line in plot interactive cut self plot cut workspace cut false store update main false intensity correction intensity correction file c mantidnightlyinstall scripts externalinterfaces mslice presenters cut plotter presenter py line in plot cut cut cut ws compute cut workspace cut axis integration axis cut norm to one cut algorithm store file c mantidnightlyinstall scripts externalinterfaces mslice models cut cut py line in cut ws self update cut axis file c mantidnightlyinstall scripts externalinterfaces mslice models cut cut py line in update cut axis ws cut axis self cut ws raw ws getydimension runtimeerror workspace does not have a y dimension | 1 |
641,615 | 20,830,658,158 | IssuesEvent | 2022-03-19 11:31:44 | the-dr-lazy/purescript-monarch | https://api.github.com/repos/the-dr-lazy/purescript-monarch | opened | Unify `command` and `update` | Type: Enhancement Priority: High Status: Pending | ### Motivation
With the current API, the exact sequence of `update` and `command` evaluation is not obvious without diving deep into the library. By unifying them into a single function, it becomes predictable.
<!-- Why are we doing this? What use cases does it support? What is the expected outcome? -->
### Basic Example
```purescript
update :: forall message model. MonadEffect m => message -> model -> Tuple model (m Unit)
```
### Flags
- [ ] I would like to submit a PR if the requested feature becomes approved (Help can be provided if you need assistance submitting a PR.).
- [ ] Breaking change.
| 1.0 | Unify `command` and `update` - ### Motivation
With the current API, the exact sequence of `update` and `command` evaluation is not obvious without diving deep into the library. By unifying them into a single function, it becomes predictable.
<!-- Why are we doing this? What use cases does it support? What is the expected outcome? -->
### Basic Example
```purescript
update :: forall message model. MonadEffect m => message -> model -> Tuple model (m Unit)
```
### Flags
- [ ] I would like to submit a PR if the requested feature becomes approved (Help can be provided if you need assistance submitting a PR.).
- [ ] Breaking change.
| priority | unify command and update motivation with current api the exact sequence of update and command evaluation is not obvious without diving into the deep of the library by unifying them into a single function it becomes predictable basic example purescript update forall message model monadeffect m message model tuple model m unit flags i would like to submit a pr if the requested feature becomes approved help can be provided if you need assistance submitting a pr breaking change | 1 |