Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
411,256 | 12,016,095,535 | IssuesEvent | 2020-04-10 15:22:35 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | opened | Bell message missing value for claim winnings | Bug Needed for V2 launch Priority: High | Just claimed roughly $250 from claim proceeds....$115 was profit
The bell message only shows a dash and not the value...
 | 1.0 | Bell message missing value for claim winnings - Just claimed roughly $250 from claim proceeds....$115 was profit
The bell message only shows a dash and not the value...
 | priority | bell message missing value for claim winnings just claimed roughly from claim proceeds was profit the bell message only shows a dash and not the value | 1 |
199,048 | 6,980,266,951 | IssuesEvent | 2017-12-13 00:43:45 | steemit/hivemind | https://api.github.com/repos/steemit/hivemind | closed | finalize db schema | priority/high WIP | todo:
- bool fields
consider:
- use INT ids instead of varchar(16) account names
- use block_num instead of timestamp
hive_posts_cache:
- few missing fields: `depth`, `get_post_stats` vals
- bonus: distinguish simple vote updates (payout/ranking fields) from body/thread updates | 1.0 | finalize db schema - todo:
- bool fields
consider:
- use INT ids instead of varchar(16) account names
- use block_num instead of timestamp
hive_posts_cache:
- few missing fields: `depth`, `get_post_stats` vals
- bonus: distinguish simple vote updates (payout/ranking fields) from body/thread updates | priority | finalize db schema todo bool fields consider use int ids instead of varchar account names use block num instead of timestamp hive posts cache few missing fields depth get post stats vals bonus distinguish simple vote updates payout ranking fields from body thread updates | 1 |
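The two "consider" items in the row above — integer surrogate ids instead of varchar(16) account names, and `block_num` instead of a timestamp — can be sketched as follows. This is a hypothetical illustration using sqlite3, not the actual hivemind schema; table and column names are assumptions for the example.

```python
# Hypothetical sketch (NOT the real hivemind schema): illustrates storing the
# varchar(16) account name exactly once and referencing it by a compact
# INTEGER id, and recording block_num (an integer) instead of a timestamp.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The account name lives in one place; everything else references the id.
cur.execute("""
    CREATE TABLE hive_accounts (
        id   INTEGER PRIMARY KEY,
        name VARCHAR(16) NOT NULL UNIQUE
    )
""")

# Posts reference author_id rather than repeating the name on every row,
# and store block_num rather than a timestamp string.
cur.execute("""
    CREATE TABLE hive_posts (
        id        INTEGER PRIMARY KEY,
        author_id INTEGER NOT NULL REFERENCES hive_accounts(id),
        block_num INTEGER NOT NULL,
        depth     INTEGER NOT NULL DEFAULT 0
    )
""")

cur.execute("INSERT INTO hive_accounts (name) VALUES ('alice')")
author_id = cur.execute(
    "SELECT id FROM hive_accounts WHERE name = 'alice'"
).fetchone()[0]
cur.execute(
    "INSERT INTO hive_posts (author_id, block_num) VALUES (?, ?)",
    (author_id, 18000000),
)
```

The payoff is that renaming-proof joins and smaller indexes come for free: every post row carries a 4–8 byte id instead of a repeated 16-byte name.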
646,265 | 21,042,647,840 | IssuesEvent | 2022-03-31 13:36:38 | AY2122S2-CS2103T-W12-1/tp | https://api.github.com/repos/AY2122S2-CS2103T-W12-1/tp | closed | As a Teaching Assistant, I can “tag” students with various tags | type.Story priority.High | ... so that I can keep track of who to follow up on, who to check up on more often etc | 1.0 | As a Teaching Assistant, I can “tag” students with various tags - ... so that I can keep track of who to follow up on, who to check up on more often etc | priority | as a teaching assistant i can “tag” students with various tags so that i can keep track of who to follow up on who to check up on more often etc | 1 |
654,967 | 21,674,876,453 | IssuesEvent | 2022-05-08 14:53:39 | emredermann/453_Test | https://api.github.com/repos/emredermann/453_Test | opened | Dummy function complexity enhancement | bug enhancement good first issue High priority | Dummy function in dummy.py file must be reimplemented in order to get better complexity. | 1.0 | Dummy function complexity enhancement - Dummy function in dummy.py file must be reimplemented in order to get better complexity. | priority | dummy function complexity enhancement dummy function in dummy py file must be reimplemented in order to get better complexity | 1 |
96,114 | 3,964,556,367 | IssuesEvent | 2016-05-03 01:42:13 | donejs/donejs | https://api.github.com/repos/donejs/donejs | closed | Run server throws error for undefined version | bug Priority - High | I have created app with `donejs add app anniv`.
and changed port to avoid conflicts with another app I run.
and run application with `donejs develop`
```sh
> anniv@0.0.0 develop /Users/mshin/Workspace/github/anniv
> done-serve --develop --port 7000
done-serve starting on http://localhost:7000
Potentially unhandled rejection [8] TypeError: Error loading "anniv@0.0.0#index.stache!done-autorender@0.8.0#autorender" at <unknown>
Error loading "can@2.3.23#util/vdom/document/document" at file:/Users/mshin/Workspace/github/anniv/node_modules/can/util/vdom/document/document.js
Error loading "can@2.3.23#util/vdom/document/document" from "done-autorender@0.8.0#autorender" at file:/Users/mshin/Workspace/github/anniv/node_modules/done-autorender/src/autorender.js
Cannot read property 'version' of undefined
at createModuleNameAndNormalize (file:/Users/mshin/Workspace/github/anniv/node_modules/steal/ext/npm-extension.js:179:37)
at tryCatchReject (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:1183:30)
at runContinuation1 (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:1142:4)
at Fulfilled.when (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:930:4)
at Pending.run (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:821:13)
at Scheduler._drain (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:97:19)
at Scheduler.drain (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:62:9)
at nextTickCallbackWith0Args (node.js:415:9)
at process._tickCallback (node.js:344:13)
Potentially unhandled rejection [7] TypeError: Error loading "can@2.3.23#util/vdom/document/document" at file:/Users/mshin/Workspace/github/anniv/node_modules/can/util/vdom/document/document.js
Error loading "can@2.3.23#util/vdom/document/document" from "anniv@0.0.0#index.stache!done-autorender@0.8.0#autorender" at file:/Users/mshin/Workspace/github/anniv/src/index.stache
Cannot read property 'version' of undefined
at createModuleNameAndNormalize (file:/Users/mshin/Workspace/github/anniv/node_modules/steal/ext/npm-extension.js:179:37)
at tryCatchReject (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:1183:30)
at runContinuation1 (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:1142:4)
at Fulfilled.when (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:930:4)
at Pending.run (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:821:13)
at Scheduler._drain (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:97:19)
at Scheduler.drain (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:62:9)
at nextTickCallbackWith0Args (node.js:415:9)
at process._tickCallback (node.js:344:13)
``` | 1.0 | Run server throws error for undefined version - I have created app with `donejs add app anniv`.
and changed port to avoid conflicts with another app I run.
and run application with `donejs develop`
```sh
> anniv@0.0.0 develop /Users/mshin/Workspace/github/anniv
> done-serve --develop --port 7000
done-serve starting on http://localhost:7000
Potentially unhandled rejection [8] TypeError: Error loading "anniv@0.0.0#index.stache!done-autorender@0.8.0#autorender" at <unknown>
Error loading "can@2.3.23#util/vdom/document/document" at file:/Users/mshin/Workspace/github/anniv/node_modules/can/util/vdom/document/document.js
Error loading "can@2.3.23#util/vdom/document/document" from "done-autorender@0.8.0#autorender" at file:/Users/mshin/Workspace/github/anniv/node_modules/done-autorender/src/autorender.js
Cannot read property 'version' of undefined
at createModuleNameAndNormalize (file:/Users/mshin/Workspace/github/anniv/node_modules/steal/ext/npm-extension.js:179:37)
at tryCatchReject (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:1183:30)
at runContinuation1 (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:1142:4)
at Fulfilled.when (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:930:4)
at Pending.run (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:821:13)
at Scheduler._drain (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:97:19)
at Scheduler.drain (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:62:9)
at nextTickCallbackWith0Args (node.js:415:9)
at process._tickCallback (node.js:344:13)
Potentially unhandled rejection [7] TypeError: Error loading "can@2.3.23#util/vdom/document/document" at file:/Users/mshin/Workspace/github/anniv/node_modules/can/util/vdom/document/document.js
Error loading "can@2.3.23#util/vdom/document/document" from "anniv@0.0.0#index.stache!done-autorender@0.8.0#autorender" at file:/Users/mshin/Workspace/github/anniv/src/index.stache
Cannot read property 'version' of undefined
at createModuleNameAndNormalize (file:/Users/mshin/Workspace/github/anniv/node_modules/steal/ext/npm-extension.js:179:37)
at tryCatchReject (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:1183:30)
at runContinuation1 (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:1142:4)
at Fulfilled.when (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:930:4)
at Pending.run (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:821:13)
at Scheduler._drain (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:97:19)
at Scheduler.drain (/Users/mshin/Workspace/github/anniv/node_modules/steal/node_modules/steal-systemjs/node_modules/steal-es6-module-loader/dist/es6-module-loader.src.js:62:9)
at nextTickCallbackWith0Args (node.js:415:9)
at process._tickCallback (node.js:344:13)
``` | priority | run server throws error for undefined version i have created app with donejs add app anniv and changed port to avoid conflicts with another app i run and run application with donejs develop sh anniv develop users mshin workspace github anniv done serve develop port done serve starting on potentially unhandled rejection typeerror error loading anniv index stache done autorender autorender at error loading can util vdom document document at file users mshin workspace github anniv node modules can util vdom document document js error loading can util vdom document document from done autorender autorender at file users mshin workspace github anniv node modules done autorender src autorender js cannot read property version of undefined at createmodulenameandnormalize file users mshin workspace github anniv node modules steal ext npm extension js at trycatchreject users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at fulfilled when users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at pending run users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at scheduler drain users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at scheduler drain users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at node js at process tickcallback node js potentially unhandled rejection typeerror error loading can util vdom document document at file users mshin workspace github anniv node modules can 
util vdom document document js error loading can util vdom document document from anniv index stache done autorender autorender at file users mshin workspace github anniv src index stache cannot read property version of undefined at createmodulenameandnormalize file users mshin workspace github anniv node modules steal ext npm extension js at trycatchreject users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at fulfilled when users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at pending run users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at scheduler drain users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at scheduler drain users mshin workspace github anniv node modules steal node modules steal systemjs node modules steal module loader dist module loader src js at node js at process tickcallback node js | 1 |
302,437 | 9,259,594,156 | IssuesEvent | 2019-03-18 00:44:59 | CosmiQ/cw-geodata | https://api.github.com/repos/CosmiQ/cw-geodata | opened | Implement unit tests for vector.graph | Difficulty: Medium Priority: High Type: Maintenance | @avanetten, do you think you could do this? If not, can you drop me a quick example file to build a graph from alongside a pickled graph object to compare it to? Ideally from a low-density tile or something like that so that I can include the files in the testing submodule.
Thanks! | 1.0 | Implement unit tests for vector.graph - @avanetten, do you think you could do this? If not, can you drop me a quick example file to build a graph from alongside a pickled graph object to compare it to? Ideally from a low-density tile or something like that so that I can include the files in the testing submodule.
Thanks! | priority | implement unit tests for vector graph avanetten do you think you could do this if not can you drop me a quick example file to build a graph from alongside a pickled graph object to compare it to ideally from a low density tile or something like that so that i can include the files in the testing submodule thanks | 1 |
719,131 | 24,747,860,525 | IssuesEvent | 2022-10-21 11:16:50 | ChildMindInstitute/mindlogger-applet-builder | https://api.github.com/repos/ChildMindInstitute/mindlogger-applet-builder | closed | Reordering the report components is not considered a change in the applet | bug Config Report High EK-High Priority | **Steps to reproduce**
1. Open a site and log in: https://admin.mindlogger.org/
2. Open the activity with a configured report and at least two components
3. Reorder components
4. Click "Save"
5. Pay attention to the warning pop-up
**Actual result**
Reordering the report components is not considered a change in the applet
**Expected result**
Reordering the report components is considered a change in the applet
**Notes**: If there will be any other changes the reordering will be saved and applied.
**Video**: https://www.screencast.com/t/coGnguabIv
**Environment:**
https://admin.mindlogger.org/
https://admin-staging.mindlogger.org/
Win 10 / Chrome 103
prod account:
test-user1@com.us / qwerty
my applet 6 / Edit test
Applet password: Qwe123!!! | 1.0 | Reordering the report components is not considered a change in the applet - **Steps to reproduce**
1. Open a site and log in: https://admin.mindlogger.org/
2. Open the activity with a configured report and at least two components
3. Reorder components
4. Click "Save"
5. Pay attention to the warning pop-up
**Actual result**
Reordering the report components is not considered a change in the applet
**Expected result**
Reordering the report components is considered a change in the applet
**Notes**: If there will be any other changes the reordering will be saved and applied.
**Video**: https://www.screencast.com/t/coGnguabIv
**Environment:**
https://admin.mindlogger.org/
https://admin-staging.mindlogger.org/
Win 10 / Chrome 103
prod account:
test-user1@com.us / qwerty
my applet 6 / Edit test
Applet password: Qwe123!!! | priority | reordering the report components is not considered a change in the applet steps to reproduce open a site and log in open the activity with a configured report and at least two components reorder components click save pay attention to the warning pop up actual result reordering the report components is not considered a change in the applet expected result reordering the report components is considered a change in the applet notes if there will be any other changes the reordering will be saved and applied video environment win chrome prod account test com us qwerty my applet edit test applet password | 1 |
177,870 | 6,588,041,544 | IssuesEvent | 2017-09-14 00:20:18 | gravityview/GravityView | https://api.github.com/repos/gravityview/GravityView | opened | Editing an entry strips the labels from product calculation fields | Bug Core: Edit Entry Core: Fields Difficulty: Medium Priority: High | The labels get stripped from the receipt table after editing in Edit Entry; probably due to bad serialization?
Also look into whether it's the deleting of the entry meta on `GravityView_Field_Product::clear_product_info_cache()` method

See [HS#10931](https://secure.helpscout.net/conversation/430351819/10931/). | 1.0 | Editing an entry strips the labels from product calculation fields - The labels get stripped from the receipt table after editing in Edit Entry; probably due to bad serialization?
Also look into whether it's the deleting of the entry meta on `GravityView_Field_Product::clear_product_info_cache()` method

See [HS#10931](https://secure.helpscout.net/conversation/430351819/10931/). | priority | editing an entry strips the labels from product calculation fields the labels get stripped from the receipt table after editing in edit entry probably due to bad serialization also look into whether it s the deleting of the entry meta on gravityview field product clear product info cache method see | 1 |
666,892 | 22,390,978,152 | IssuesEvent | 2022-06-17 07:39:41 | nexB/scancode.io | https://api.github.com/repos/nexB/scancode.io | closed | DiscoveredPackage matching query does not exist. | bug high priority | Input: https://github.com/ballerina-platform/ballerina-lang/archive/refs/tags/v1.2.29.tar.gz
Pipeline: `scan_package`
```
DiscoveredPackage matching query does not exist.
Traceback:
File "/app/scanpipe/pipelines/__init__.py", line 115, in execute
step(self)
File "/app/scanpipe/pipelines/scan_package.py", line 119, in build_inventory_from_scan
scancode.create_inventory_from_scan(self.project, self.scan_output_location)
File "/app/scanpipe/pipes/scancode.py", line 504, in create_inventory_from_scan
create_codebase_resources(project, scanned_codebase)
File "/app/scanpipe/pipes/scancode.py", line 398, in create_codebase_resources
package = DiscoveredPackage.objects.get(package_uid=package_uid)
File "/usr/local/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 496, in get
raise self.model.DoesNotExist(
```
The culprit is the `for_packages` attribute on the `codebase/ballerina-lang-1.2.29/tool-plugins/theia/yarn.lock` resource that returns a list containing a `None` value as the `package_uid`, `for_packages == [None]`.
`None` cannot be matched in the DB thus the model.DoesNotExist exception.
```
from commoncode.resource import VirtualCodebase
scanned_codebase = VirtualCodebase("scancode-2022-06-15-04-55-57.json")
resource = scanned_codebase.get_resource('codebase/ballerina-lang-1.2.29/tool-plugins/theia/yarn.lock')
resource.for_packages # -> [None]
```
1. This is likely an issue in the way the `for_packages` value is generated and needs to be fixed.
2. The package QuerySet is not scoped with the current project in create_codebase_resources in https://github.com/nexB/scancode.io/blob/main/scanpipe/pipes/scancode.py#L398
3. The `package_uid` field is not enforced to be unique in a project, also it is not indexed at the moment and it should as the code now heavily relies on it to fetch DiscoveredPackage instances https://github.com/nexB/scancode.io/blob/main/scanpipe/models.py#L1729 | 1.0 | DiscoveredPackage matching query does not exist. - Input: https://github.com/ballerina-platform/ballerina-lang/archive/refs/tags/v1.2.29.tar.gz
Pipeline: `scan_package`
```
DiscoveredPackage matching query does not exist.
Traceback:
File "/app/scanpipe/pipelines/__init__.py", line 115, in execute
step(self)
File "/app/scanpipe/pipelines/scan_package.py", line 119, in build_inventory_from_scan
scancode.create_inventory_from_scan(self.project, self.scan_output_location)
File "/app/scanpipe/pipes/scancode.py", line 504, in create_inventory_from_scan
create_codebase_resources(project, scanned_codebase)
File "/app/scanpipe/pipes/scancode.py", line 398, in create_codebase_resources
package = DiscoveredPackage.objects.get(package_uid=package_uid)
File "/usr/local/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 496, in get
raise self.model.DoesNotExist(
```
The culprit is the `for_packages` attribute on the `codebase/ballerina-lang-1.2.29/tool-plugins/theia/yarn.lock` resource that returns a list containing a `None` value as the `package_uid`, `for_packages == [None]`.
`None` cannot be matched in the DB thus the model.DoesNotExist exception.
```
from commoncode.resource import VirtualCodebase
scanned_codebase = VirtualCodebase("scancode-2022-06-15-04-55-57.json")
resource = scanned_codebase.get_resource('codebase/ballerina-lang-1.2.29/tool-plugins/theia/yarn.lock')
resource.for_packages # -> [None]
```
1. This is likely an issue in the way the `for_packages` value is generated and needs to be fixed.
2. The package QuerySet is not scoped with the current project in create_codebase_resources in https://github.com/nexB/scancode.io/blob/main/scanpipe/pipes/scancode.py#L398
3. The `package_uid` field is not enforced to be unique in a project, also it is not indexed at the moment and it should as the code now heavily relies on it to fetch DiscoveredPackage instances https://github.com/nexB/scancode.io/blob/main/scanpipe/models.py#L1729 | priority | discoveredpackage matching query does not exist input pipeline scan package discoveredpackage matching query does not exist traceback file app scanpipe pipelines init py line in execute step self file app scanpipe pipelines scan package py line in build inventory from scan scancode create inventory from scan self project self scan output location file app scanpipe pipes scancode py line in create inventory from scan create codebase resources project scanned codebase file app scanpipe pipes scancode py line in create codebase resources package discoveredpackage objects get package uid package uid file usr local lib site packages django db models manager py line in manager method return getattr self get queryset name args kwargs file usr local lib site packages django db models query py line in get raise self model doesnotexist the culprit is the for packages attribute on the codebase ballerina lang tool plugins theia yarn lock resource that returns a list containing a none value as the package uid for packages none cannot be matched in the db thus the model doesnotexist exception from commoncode resource import virtualcodebase scanned codebase virtualcodebase scancode json resource scanned codebase get resource codebase ballerina lang tool plugins theia yarn lock resource for packages this is likely an issue in the way the for packages value is generated and needs to be fixed the package queryset is not scoped with the current project in create codebase resources in the package uid field is not enforced to be unique in a project also it is not indexed at the moment and it should as the code now heavily relies on it to fetch discoveredpackage instances | 1 |
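The fix described in points 1–2 above — guard against `None` entries in `for_packages` and scope the lookup to the current project — can be sketched in plain Python. This is a hedged illustration, not the actual scanpipe code; `resolve_packages` and `project_packages_by_uid` are hypothetical names standing in for the project-scoped QuerySet lookup.

```python
# Hedged sketch (plain Python, not scanpipe's real implementation): drop None
# entries from for_packages before looking a package_uid up, and resolve only
# against the current project's packages, so a stray None (as seen on the
# yarn.lock resource above) cannot raise DoesNotExist.
def resolve_packages(for_packages, project_packages_by_uid):
    """Return the project's package objects for the non-None uids in for_packages.

    `project_packages_by_uid` stands in for a per-project lookup, i.e. the
    equivalent of scoping the QuerySet to the current project.
    """
    resolved = []
    for package_uid in for_packages:
        if package_uid is None:  # the bug: [None] slipped through unguarded
            continue
        package = project_packages_by_uid.get(package_uid)
        if package is not None:
            resolved.append(package)
    return resolved
```

With this guard, a resource whose `for_packages == [None]` simply resolves to an empty list instead of crashing the pipeline.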
347,457 | 10,430,184,635 | IssuesEvent | 2019-09-17 05:56:42 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Need to allow the user to navigate if user is using iframe | NEED FAST ACTION NEXT UPDATE [Priority: HIGH] bug | Console Error: Unsafe JavaScript attempt to initiate navigation for frame with origin 'https://www.thecorporatethiefbeats.com' from frame with URL 'https://corporatethief.infinity.airbit.com/?config_id=2325&embed=1#amp=1'. The frame attempting navigation of the top-level window is sandboxed, but the flag of 'allow-top-navigation' or 'allow-top-navigation-by-user-activation' is not set.
HelpScout Link: https://secure.helpscout.net/conversation/940768169/79433?folderId=1060554
Need to add this attribute in the sandbox to allow the user to navigate properly:
"allow-top-navigation"
| 1.0 | Need to allow the user to navigate if user is using iframe - Console Error: Unsafe JavaScript attempt to initiate navigation for frame with origin 'https://www.thecorporatethiefbeats.com' from frame with URL 'https://corporatethief.infinity.airbit.com/?config_id=2325&embed=1#amp=1'. The frame attempting navigation of the top-level window is sandboxed, but the flag of 'allow-top-navigation' or 'allow-top-navigation-by-user-activation' is not set.
HelpScout Link: https://secure.helpscout.net/conversation/940768169/79433?folderId=1060554
Need to add this attribute in the sandbox to allow the user to navigate properly:
"allow-top-navigation"
| priority | need to allow the user to navigate if user is using iframe console error unsafe javascript attempt to initiate navigation for frame with origin from frame with url the frame attempting navigation of the top level window is sandboxed but the flag of allow top navigation or allow top navigation by user activation is not set helpscout link need to add this attribute in the sandbox to allow the user to navigate properly allow top navigation | 1 |
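The fix the row above describes is adding the `allow-top-navigation` token to the iframe's `sandbox` attribute. The plugin itself is PHP, so the helper below is illustrative only — a hypothetical Python function showing the token-list manipulation, not the plugin's real code.

```python
# Hypothetical helper (the actual plugin is PHP): the sandbox attribute is a
# space-separated token list; append "allow-top-navigation" only if it is not
# already present, so the sandboxed frame may navigate the top-level window.
def add_sandbox_token(sandbox_value, token="allow-top-navigation"):
    """Return the sandbox attribute value with `token` appended if missing."""
    tokens = sandbox_value.split()
    if token not in tokens:
        tokens.append(token)
    return " ".join(tokens)
```

Note that per the HTML spec, `allow-top-navigation-by-user-activation` is the stricter alternative mentioned in the console error: it permits navigation only in response to a user gesture.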
392,541 | 11,592,719,963 | IssuesEvent | 2020-02-24 12:07:15 | bryntum/support | https://api.github.com/repos/bryntum/support | closed | Context menu differs in schedule vs grid | bug high-priority resolved | Open advanced demo
Right click a task
Right click a row in the locked section. Note the difference
In Basic demo, context menus are identical | 1.0 | Context menu differs in schedule vs grid - Open advanced demo
Right click a task
Right click a row in the locked section. Note the difference
In Basic demo, context menus are identical | priority | context menu differs in schedule vs grid open advanced demo right click a task right click a row in the locked section note the difference in basic demo context menus are identical | 1 |
794,924 | 28,054,759,649 | IssuesEvent | 2023-03-29 08:39:30 | inlang/inlang | https://api.github.com/repos/inlang/inlang | closed | validate config | type: feature scope: core priority: high | ## Problem
1. Programmatically testing the config is not possible, e.g. "Does the `readResources` function work?", "Are my lints correct?".
2. Plugin authors are starting to "hack" the inlang config. That's great, but it will lead to unintended consequences and breaking changes in the future.
## Proposal
Provide a module (`inlang/core/test` ?) that can validate the config schema and functionality. That module can be used by, for example, the CLI that provides an `inlang config validate` command and the editor that checks the config on the fly.
- [ ] Validate the config schema with something like `zod`.
- [ ] "Test run" functions that are defined in the config
- [ ] Provide a CLI command like `inlang config validate` to test the config file programmatically
- [ ] Use regex to ban code in the config file that won't run in the browser like `import`, node globals, etc.
## Additional information
Using zod for the validation seems to make sense.
| 1.0 | validate config - ## Problem
1. Programmatically testing the config is not possible, e.g. "Does the `readResources` function work?", "Are my lints correct?".
2. Plugin authors are starting to "hack" the inlang config. That's great, but it will lead to unintended consequences and breaking changes in the future.
## Proposal
Provide a module (`inlang/core/test` ?) that can validate the config schema and functionality. That module can be used by, for example, the CLI that provides an `inlang config validate` command and the editor that checks the config on the fly.
- [ ] Validate the config schema with something like `zod`.
- [ ] "Test run" functions that are defined in the config
- [ ] Provide a CLI command like `inlang config validate` to test the config file programmatically
- [ ] Use regex to ban code in the config file that won't run in the browser like `import`, node globals, etc.
## Additional information
Using zod for the validation seems to make sense.
| priority | validate config problem programatically testing the config is not possible e g does the readresources function work are my lints correct plugin authors are starting to hack the inlang config that s great but will lead to unintended consequences and breaking changes in the future proposal provide a module inlang core test that can validate the config schema and functionality that module can be used by for example the cli that provides an inlang config validate command and the editor that checks the config on the fly validate the config schema with something like zod test run functions that are defined in the config provide a cli command like inlang config validate to test the config file programmatically use regex to ban code in the config file that won t run in the browser like import node globals etc additional information using zod for the validation seems to make sense | 1 |
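The issue above proposes zod (a TypeScript library) for schema validation; the sketch below is a rough Python analogue only, showing the two checks the proposal combines: validate the config's shape, then "test run" the functions it defines. `validate_config` and the `readResources` key shape are assumptions for illustration, not inlang's real API.

```python
# Rough Python analogue of the proposed validation (inlang itself is
# TypeScript and would use zod): check that required config entries exist and
# are callable, then "test run" them and capture any exception as an error.
def validate_config(config):
    """Return a list of human-readable errors for an inlang-style config dict."""
    errors = []
    read_resources = config.get("readResources")
    if not callable(read_resources):
        errors.append("readResources must be a function")
    else:
        try:
            read_resources()  # "test run", as the proposal's second checkbox suggests
        except Exception as exc:
            errors.append(f"readResources raised: {exc}")
    return errors
```

A CLI wrapper (the proposed `inlang config validate`) would then just print the returned errors and exit non-zero if the list is non-empty.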
584,466 | 17,455,549,834 | IssuesEvent | 2021-08-06 00:14:53 | zulip/zulip | https://api.github.com/repos/zulip/zulip | opened | Add infrastructure to prevent double-sending of custom emails | help wanted priority: high area: emails | The `manage.py send_custom_email` system is very useful for sending custom emails to portions of the userbase for a Zulip server. However, one flaw in its design is that if it encounters an error that crashes the job, there isn't a convenient way to continue (without emailing users twice). While it is possible to do so with string-parsing `/var/log/zulip/send_email.log`, it'd be much better to track this information in the Zulip database.
I think the right way to do this is to create a `RealmAuditLog` entry when custom emails are sent to a given user; we can number it as `CUSTOM_EMAIL_SENT = 800` in `AbtractRealmAuditLog` (picked to not conflict with the 700 in https://github.com/zulip/zulip/issues/19528) with the full ID for the custom email (the long thing in the template path) included in the `extra_data` key.
And then we can have the `send_custom_email` function exclude users who have a RealmAuditLog entry for the current email's ID. We will want to do this exclusion carefully to ensure that logic like the `--marketing` option, which are designed to only email an address once even if they have multiple accounts for their email address, will avoid sending the email if any UserProfile with that `delivery_email` has such a RealmAuditLog entry.
This won't be a perfect system, in that custom email ID changes if the Markdown template passed into it changes, but that can be addressed by hand by excluding additional email IDs.
| 1.0 | Add infrastructure to prevent double-sending of custom emails - The `manage.py send_custom_email` system is very useful for sending custom emails to portions of the userbase for a Zulip server. However, one flaw in its design is that if it encounters an error that crashes the job, there isn't a convenient way to continue (without emailing users twice). While it is possible to do so with string-parsing `/var/log/zulip/send_email.log`, it'd be much better to track this information in the Zulip database.
I think the right way to do this is to create a `RealmAuditLog` entry when custom emails are sent to a given user; we can number it as `CUSTOM_EMAIL_SENT = 800` in `AbtractRealmAuditLog` (picked to not conflict with the 700 in https://github.com/zulip/zulip/issues/19528) with the full ID for the custom email (the long thing in the template path) included in the `extra_data` key.
And then we can have the `send_custom_email` function exclude users who have a RealmAuditLog entry for the current email's ID. We will want to do this exclusion carefully to ensure that logic like the `--marketing` option, which are designed to only email an address once even if they have multiple accounts for their email address, will avoid sending the email if any UserProfile with that `delivery_email` has such a RealmAuditLog entry.
This won't be a perfect system, in that custom email ID changes if the Markdown template passed into it changes, but that can be addressed by hand by excluding additional email IDs.
| priority | add infrastructure to prevent double sending of custom emails the manage py send custom email system is very useful for sending custom emails to portions of the userbase for a zulip server however one flaw in its design is that if it encounters an error that crashes the job there isn t a convenient way to continue without emailing users twice while it is possible to do so with string parsing var log zulip send email log it d be much better to track this information in the zulip database i think the right way to do this is to create a realmauditlog entry when custom emails are sent to a given user we can number it as custom email sent in abtractrealmauditlog picked to not conflict with the in with the full id for the custom email the long thing in the template path included in the extra data key and then we can have the send custom email function exclude users who have a realmauditlog entry for the current email s id we will want to do this exclusion carefully to ensure that logic like the marketing option which are designed to only email an address once even if they have multiple accounts for their email address will avoid sending the email if any userprofile with that delivery email has such a realmauditlog entry this won t be a perfect system in that custom email id changes if the markdown template passed into it changes but that can be addressed by hand by excluding additional email ids | 1 |
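The exclusion logic described in the Zulip record above (skip a user if any UserProfile sharing their `delivery_email` already has an audit-log entry for this email ID) can be sketched in plain Python. Dicts stand in for Zulip's actual ORM models, and the field names are assumptions:

```python
def users_to_email(candidates, audit_log, email_id, marketing=False):
    """Pick recipients who have no audit-log entry for this custom email.

    candidates: dicts with 'id' and 'delivery_email' (stand-in for UserProfile).
    audit_log:  dicts with 'user_id' and 'email_id' (stand-in for RealmAuditLog
                rows whose extra_data carries the custom email's ID).
    """
    sent_ids = {row["user_id"] for row in audit_log if row["email_id"] == email_id}
    fresh = [u for u in candidates if u["id"] not in sent_ids]
    if not marketing:
        return fresh
    # --marketing semantics: one send per delivery_email, so also drop any
    # account whose address belongs to an account that was already emailed.
    sent_addresses = {u["delivery_email"] for u in candidates if u["id"] in sent_ids}
    return [u for u in fresh if u["delivery_email"] not in sent_addresses]
```

With this shape, resuming a crashed `send_custom_email` run naturally skips everyone already logged, instead of string-parsing `/var/log/zulip/send_email.log`.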
776,271 | 27,254,137,120 | IssuesEvent | 2023-02-22 10:15:50 | AY2223S2-CS2103T-T09-4/tp | https://api.github.com/repos/AY2223S2-CS2103T-T09-4/tp | closed | Add client management features to the User Guide for v1.1 | priority.High User Guide type.Task | As a user, I can see the client management features in the User Guide. | 1.0 | Add client management features to the User Guide for v1.1 - As a user, I can see the client management features in the User Guide. | priority | add client management features to the user guide for as a user i can see the client management features in the user guide | 1 |
628,141 | 19,976,732,596 | IssuesEvent | 2022-01-29 07:35:39 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Need to make proper compatibility with amp analytics function when infinite scroll is used. | bug Urgent [Priority: HIGH] | Ref:- https://secure.helpscout.net/conversation/1752450699/222045?folderId=4874234
Need to make proper compatibility with amp analytics function when infinite scroll is used and analytics only register the first pageview. | 1.0 | Need to make proper compatibility with amp analytics function when infinite scroll is used. - Ref:- https://secure.helpscout.net/conversation/1752450699/222045?folderId=4874234
Need to make proper compatibility with amp analytics function when infinite scroll is used and analytics only register the first pageview. | priority | need to make proper compatibility with amp analytics function when infinite scroll is used ref need to make proper compatibility with amp analytics function when infinite scroll is used and analytics only register the first pageview | 1 |
822,315 | 30,864,807,042 | IssuesEvent | 2023-08-03 07:15:51 | kubebb/components | https://api.github.com/repos/kubebb/components | closed | deploy kubebb stack in private cluster | enhancement priority-high difficulty-medium | When the Kubernetes cluster cannot access the public network, we have to use kubebb differently.
## Steps to deploy the kubebb stack in a private cluster
1. deploy k8s cluster
2. deploy a private image registry(optional)
3. push all images to private image registry
- kubebb/core
- buildingbase images
- chartmuseum image
- ...
4. deploy kubebb core
5. deploy a private component repository
- use chartmuseum
- only internal usage
6. push all official components into this private repository
7. create `Repository` into `kubebb/core`
- with image registry override
## Steps to deploy a component in private cluster
1. push component into private repository
2. push required images into private image registry
3. update `componentplan.yaml` to use private image registry and private component repository
4. apply `componentplan.yaml` and check status
| 1.0 | deploy kubebb stack in private cluster - When the Kubernetes cluster cannot access the public network, we have to use kubebb differently.
## Steps to deploy the kubebb stack in a private cluster
1. deploy k8s cluster
2. deploy a private image registry(optional)
3. push all images to private image registry
- kubebb/core
- buildingbase images
- chartmuseum image
- ...
4. deploy kubebb core
5. deploy a private component repository
- use chartmuseum
- only internal usage
6. push all official components into this private repository
7. create `Repository` into `kubebb/core`
- with image registry override
## Steps to deploy a component in private cluster
1. push component into private repository
2. push required images into private image registry
3. update `componentplan.yaml` to use private image registry and private component repository
4. apply `componentplan.yaml` and check status
| priority | deploy kubebb stack in private cluster when kubernetes cluster can not access public network we have to use kubebb differently steps to deploy kubbb stack in a private cluster deploy cluster deploy a private image registry optional push all images to private image registry kubebb core buildingbase images chartmuseum image deploy kubebb core deploy a private component repository use chartmuseum only internal usage push all official components into this private repository create repository into kubebb core with image registry override steps to deploy a component in private cluster push component into private repository push required images into private image registry update componentplan yaml to use private image registry and private component repository apply componentplan yaml and check status | 1 |
694,631 | 23,822,023,238 | IssuesEvent | 2022-09-05 12:07:54 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | [bug][conan 2] Reference install/download fails - conan.tools.scm not available | priority: high bug | When downloading or installing the `zlib/1.2.11` (latest revision of the recipe should be prepared to work with conan 2) recipe from conan center it fails the command when loading the recipe contents saying `ModuleNotFoundError: No module named 'conan.tools.scm'`
### Environment Details (include every applicable attribute)
* Operating System+version: Windows 10
* Compiler+version: VS 2020
* Conan version: 2.0.0.beta2
* Python version: 3.10.2
### Steps to reproduce (Include if Applicable)
```
conan install --reference zlib/1.2.11#d77ee68739fcbe5bf37b8a4690eea6ea -r conancenter
```
or
```
conan download zlib/1.2.11#d77ee68739fcbe5bf37b8a4690eea6ea -p os=Windows -r conancenter
```
### Logs (Executed commands with output) (Include/Attach if Applicable)
```
(conan2) λ conan install --reference zlib/1.2.11#d77ee68739fcbe5bf37b8a4690eea6ea -r conancenter
-------- Input profiles ----------
Profile host:
[settings]
arch=x86_64
build_type=Release
compiler=msvc
compiler.cppstd=14
compiler.runtime=dynamic
compiler.runtime_type=Release
compiler.version=192
os=Windows
[options]
[tool_requires]
[env]
Profile build:
[settings]
arch=x86_64
build_type=Release
compiler=msvc
compiler.cppstd=14
compiler.runtime=dynamic
compiler.runtime_type=Release
compiler.version=192
os=Windows
[options]
[tool_requires]
[env]
-------- Computing dependency graph ----------
Graph root
virtual
-------- Computing necessary packages ----------
ERROR: Package 'zlib/1.2.11' not resolved: zlib/1.2.11: Cannot load recipe.
Error loading conanfile at 'C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py': Unable to load conanfile in C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py", line 5, in <module>
from conan.tools.scm import Version
ModuleNotFoundError: No module named 'conan.tools.scm'
```
or
```
(conan2) λ conan download zlib/1.2.11#d77ee68739fcbe5bf37b8a4690eea6ea -p os=Windows -r conancenter
Downloading zlib/1.2.11#d77ee68739fcbe5bf37b8a4690eea6ea
Downloading conanmanifest.txt
Downloading conanfile.py
Downloading conan_export.tgz
Decompressing conan_export.tgz
ERROR: Error loading conanfile at 'C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py': Unable to load conanfile in C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py", line 5, in <module>
from conan.tools.scm import Version
ModuleNotFoundError: No module named 'conan.tools.scm'
```
| 1.0 | [bug][conan 2] Reference install/download fails - conan.tools.scm not available - When downloading or installing the `zlib/1.2.11` (latest revision of the recipe should be prepared to work with conan 2) recipe from conan center it fails the command when loading the recipe contents saying `ModuleNotFoundError: No module named 'conan.tools.scm'`
### Environment Details (include every applicable attribute)
* Operating System+version: Windows 10
* Compiler+version: VS 2020
* Conan version: 2.0.0.beta2
* Python version: 3.10.2
### Steps to reproduce (Include if Applicable)
```
conan install --reference zlib/1.2.11#d77ee68739fcbe5bf37b8a4690eea6ea -r conancenter
```
or
```
conan download zlib/1.2.11#d77ee68739fcbe5bf37b8a4690eea6ea -p os=Windows -r conancenter
```
### Logs (Executed commands with output) (Include/Attach if Applicable)
```
(conan2) λ conan install --reference zlib/1.2.11#d77ee68739fcbe5bf37b8a4690eea6ea -r conancenter
-------- Input profiles ----------
Profile host:
[settings]
arch=x86_64
build_type=Release
compiler=msvc
compiler.cppstd=14
compiler.runtime=dynamic
compiler.runtime_type=Release
compiler.version=192
os=Windows
[options]
[tool_requires]
[env]
Profile build:
[settings]
arch=x86_64
build_type=Release
compiler=msvc
compiler.cppstd=14
compiler.runtime=dynamic
compiler.runtime_type=Release
compiler.version=192
os=Windows
[options]
[tool_requires]
[env]
-------- Computing dependency graph ----------
Graph root
virtual
-------- Computing necessary packages ----------
ERROR: Package 'zlib/1.2.11' not resolved: zlib/1.2.11: Cannot load recipe.
Error loading conanfile at 'C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py': Unable to load conanfile in C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py", line 5, in <module>
from conan.tools.scm import Version
ModuleNotFoundError: No module named 'conan.tools.scm'
```
or
```
(conan2) λ conan download zlib/1.2.11#d77ee68739fcbe5bf37b8a4690eea6ea -p os=Windows -r conancenter
Downloading zlib/1.2.11#d77ee68739fcbe5bf37b8a4690eea6ea
Downloading conanmanifest.txt
Downloading conanfile.py
Downloading conan_export.tgz
Decompressing conan_export.tgz
ERROR: Error loading conanfile at 'C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py': Unable to load conanfile in C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\danimtb\.conan2\p\92eecd812928ae0c\e\conanfile.py", line 5, in <module>
from conan.tools.scm import Version
ModuleNotFoundError: No module named 'conan.tools.scm'
```
| priority | reference install download fails conan tools scm not available when downloading or installing the zlib latest revision of the recipe should be prepared to work with conan recipe from conan center it fails the command when loading the recipe contents saying modulenotfounderror no module named conan tools scm environment details include every applicable attribute operating system version windows compiler version vs conan version python version steps to reproduce include if applicable conan install reference zlib r conancenter or conan download zlib p os windows r conancenter logs executed commands with output include attach if applicable λ conan install reference zlib r conancenter input profiles profile host arch build type release compiler msvc compiler cppstd compiler runtime dynamic compiler runtime type release compiler version os windows profile build arch build type release compiler msvc compiler cppstd compiler runtime dynamic compiler runtime type release compiler version os windows computing dependency graph graph root virtual computing necessary packages error package zlib not resolved zlib cannot load recipe error loading conanfile at c users danimtb p e conanfile py unable to load conanfile in c users danimtb p e conanfile py file line in exec module file line in call with frames removed file c users danimtb p e conanfile py line in from conan tools scm import version modulenotfounderror no module named conan tools scm or λ conan download zlib p os windows r conancenter downloading zlib downloading conanmanifest txt downloading conanfile py downloading conan export tgz decompressing conan export tgz error error loading conanfile at c users danimtb p e conanfile py unable to load conanfile in c users danimtb p e conanfile py file line in exec module file line in call with frames removed file c users danimtb p e conanfile py line in from conan tools scm import version modulenotfounderror no module named conan tools scm | 1 |
458,301 | 13,172,808,868 | IssuesEvent | 2020-08-11 19:06:53 | infiniteautomation/ma-core-public | https://api.github.com/repos/infiniteautomation/ma-core-public | opened | Role Dao - Get Inherited should be done in a transaction | High Priority Item | This would cater for changes in the structure during this recursive call. | 1.0 | Role Dao - Get Inherited should be done in a transaction - This would cater for changes in the structure during this recursive call. | priority | role dao get inherited should be done in a transaction this would cater for changes in the structure during this recursive call | 1 |
265,389 | 8,353,752,617 | IssuesEvent | 2018-10-02 11:07:49 | handsontable/handsontable | https://api.github.com/repos/handsontable/handsontable | closed | [Column sorting] New, inserted columns breaks the table | Plugin: column sorting Priority: high Regression Status: Released Type: Bug | ### Description
After creating a few columns behind the viewport, Handsontable throws an exception.
### Steps to reproduce
<!--- Provide steps to reproduce this issue -->
1. Insert two columns behind the viewport when the `columnSorting` plugin is enabled.
2. You can see an exception.

### Your environment
* Handsontable version: 6.0.0
| 1.0 | [Column sorting] New, inserted columns breaks the table - ### Description
After creating a few columns behind the viewport, Handsontable throws an exception.
### Steps to reproduce
<!--- Provide steps to reproduce this issue -->
1. Insert two columns behind the viewport when the `columnSorting` plugin is enabled.
2. You can see an exception.

### Your environment
* Handsontable version: 6.0.0
| priority | new inserted columns breaks the table description after creating few columns behind the viewport handsontable throws an exception steps to reproduce insert two columns behind the viewport when the columnsorting plugin is enabled you can see an exception your environment handsontable version | 1 |
195,088 | 6,902,572,912 | IssuesEvent | 2017-11-25 22:24:16 | wlandau-lilly/drake | https://api.github.com/repos/wlandau-lilly/drake | closed | Require 'config' to be supplied to all user-side functions except make() and drake_config() | high priority | This will simplify and fortify the code base. Functions like `vis_drake_graph()` take a bunch of arguments and then construct a `drake_config()` list if one is not already supplied. This generates a lot of confusion, repeat code, and room for error. Only `make()` and `drake_config()` should take arguments like `parallelism` and `jobs` and `verbose` in the usual way. The others should be required to get them from a `drake_config()` list. | 1.0 | Require 'config' to be supplied to all user-side functions except make() and drake_config() - This will simplify and fortify the code base. Functions like `vis_drake_graph()` take a bunch of arguments and then construct a `drake_config()` list if one is not already supplied. This generates a lot of confusion, repeat code, and room for error. Only `make()` and `drake_config()` should take arguments like `parallelism` and `jobs` and `verbose` in the usual way. The others should be required to get them from a `drake_config()` list. | priority | require config to be supplied to all user side functions except make and drake config this will simplify and fortify the code base functions like vis drake graph take a bunch of arguments and then construct a drake config list if one is not already supplied this generates a lot of confusion repeat code and room for error only make and drake config should take arguments like parallelism and jobs and verbose in the usual way the others should be required to get them from a drake config list | 1 |
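The drake record above argues that only `make()` and `drake_config()` should accept raw arguments, with every other function required to consume the prebuilt config object. The pattern (sketched here in Python with hypothetical names; drake itself is an R package) looks like:

```python
def build_config(plan, parallelism="mclapply", jobs=1, verbose=True):
    """One constructor owns the raw arguments (a drake_config() analogue)."""
    return {"plan": plan, "parallelism": parallelism, "jobs": jobs, "verbose": verbose}

def visualize(config):
    """Downstream functions refuse raw arguments and require the config object."""
    if not isinstance(config, dict) or "plan" not in config:
        raise TypeError("pass the object returned by build_config()")
    return f"{len(config['plan'])} targets via {config['parallelism']}"
```

This removes the repeated argument lists and the "construct a config if none was supplied" fallback branches the issue complains about.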
451,975 | 13,044,277,013 | IssuesEvent | 2020-07-29 04:10:10 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | self provisioner storage permission problems | area/console area/iam kind/bug kind/need-to-verify priority/high | **Describe the Bug**
The self-provisioner can't create a volume, and the snapshot does not show up

**Versions Used**
KubeSphere: 3.0.0-dev
| 1.0 | self provisioner storage permission problems - **Describe the Bug**
The self-provisioner can't create a volume, and the snapshot does not show up

**Versions Used**
KubeSphere: 3.0.0-dev
| priority | self provisioner storage permission problems describe the bug self provisioner can t create volume and snapshot not shown up versions used kubesphere dev | 1 |
117,907 | 4,728,897,636 | IssuesEvent | 2016-10-18 17:11:15 | MRN-Code/penny-collector | https://api.github.com/repos/MRN-Code/penny-collector | closed | Drop “globals” language | enhancement high priority | There’s a lot of “globals” required (_src/utils/globals.js_). However, these are *shared* components, not items on the global object. Change the language to make this clearer. | 1.0 | Drop “globals” language - There’s a lot of “globals” required (_src/utils/globals.js_). However, these are *shared* components, not items on the global object. Change the language to make this clearer. | priority | drop “globals” language there’s a lot of “globals” required src utils globals js however these are shared components not items on the global object change the language to make this clearer | 1 |
712,339 | 24,492,019,984 | IssuesEvent | 2022-10-10 03:48:31 | AlphaWallet/alpha-wallet-android | https://api.github.com/repos/AlphaWallet/alpha-wallet-android | closed | Make a build so that the artifacts (jars for Android, frameworks for iOS) and their dependencies can be included in another project | High Priority | Details might change, so refer to https://github.com/AlphaWallet/alpha-wallet-ios/issues/5234, at least for now | 1.0 | Make a build so that the artifacts (jars for Android, frameworks for iOS) and their dependencies can be included in another project - Details might change, so refer to https://github.com/AlphaWallet/alpha-wallet-ios/issues/5234, at least for now | priority | make a build so that the artifacts jars for android frameworks for ios and their dependencies can be included in another project details might change so refer to at least for now | 1 |
265,712 | 8,357,838,220 | IssuesEvent | 2018-10-02 23:15:31 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Master: adding world leader in store Authorization prevent server from startup | Fixed High Priority | 
save
shut down server
try to launch it again
then in debugger
> Obj.data.Government = 'Obj.data.Government' threw an exception of type 'System.NullReferenceException'
@johnkslg ? | 1.0 | Master: adding world leader in store Authorization prevent server from startup - 
save
shut down server
try to launch it again
then in debugger
> Obj.data.Government = 'Obj.data.Government' threw an exception of type 'System.NullReferenceException'
@johnkslg ? | priority | master adding world leader in store authorization prevent server from startup save shut down server try to launch it again then in debugger obj data government obj data government threw an exception of type system nullreferenceexception johnkslg | 1 |
124,300 | 4,894,640,766 | IssuesEvent | 2016-11-19 11:47:22 | dalaranwow/dalaran-wow | https://api.github.com/repos/dalaranwow/dalaran-wow | closed | Rogue: Blade Flurry | Class - Rogue Mechanics On PTR Priority - High | Ok so basically the idea behind this ability is to transfer 100% of the damage you deal to target x onto target y. The damage you deal to target x is being reduced by armor, but the damage you deal to target y with blade flurry is being reduced again by armor, and therefore gets double armor penalty. This shouldnt be the case, the transferred damage should do exactly as much and bypass armor.
| 1.0 | Rogue: Blade Flurry - Ok so basically the idea behind this ability is to transfer 100% of the damage you deal to target x onto target y. The damage you deal to target x is being reduced by armor, but the damage you deal to target y with blade flurry is being reduced again by armor, and therefore gets double armor penalty. This shouldnt be the case, the transferred damage should do exactly as much and bypass armor.
| priority | rogue blade flurry ok so basically the idea behind this ability is to transfer of the damage you deal to target x onto target y the damage you deal to target x is being reduced by armor but the damage you deal to target y with blade flurry is being reduced again by armor and therefore gets double armor penalty this shouldnt be the case the transferred damage should do exactly as much and bypass armor | 1 |
543,502 | 15,883,107,256 | IssuesEvent | 2021-04-09 16:55:38 | sopra-fs21-group-26/client | https://api.github.com/repos/sopra-fs21-group-26/client | opened | Screen for lobby transition | high priority task | <h1>Story #7 Lobby </h1>
<h2>Sub-Tasks:</h2>
- [ ] CSS
- [ ] Join Lobby Button
- [ ] Create Lobby Button
- [ ] Back to Menu Button
- [ ] Click Functionality & Routing
<h2>Estimate: 0.5h</h2> | 1.0 | Screen for lobby transition - <h1>Story #7 Lobby </h1>
<h2>Sub-Tasks:</h2>
- [ ] CSS
- [ ] Join Lobby Button
- [ ] Create Lobby Button
- [ ] Back to Menu Button
- [ ] Click Functionality & Routing
<h2>Estimate: 0.5h</h2> | priority | screen for lobby transition story lobby sub tasks css join lobby button create lobby button back to menu button click functionality routing estimate | 1 |
727,598 | 25,041,140,498 | IssuesEvent | 2022-11-04 20:55:12 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | closed | [AQ40] Ouro can have no targets even if attacked | Confirmed Priority-High 60 Instance - Raid - Vanilla | ### Current Behaviour
If pulled at max range, or if Ouro can't find another target, it will not cast spells or Submerge
### Expected Blizzlike Behaviour
Ouro will always target another player
### Source
_No response_
### Steps to reproduce the problem
1. `.gm on`
2. `.go xyz -9144.474609 2126.168701 -64.607368 531 3.682328`
3. `.gm off`
4. see if Ouro has a target. Does not submerge, does not cast spells
### Extra Notes
https://github.com/chromiecraft/chromiecraft/issues/4391
### AC rev. hash/commit
https://github.com/azerothcore/azerothcore-wotlk/commit/93622ccb481e3484b90971f3450b4e3c9c902c9d
### Operating system
Windows 10
### Custom changes or Modules
_No response_ | 1.0 | [AQ40] Ouro can have no targets even if attacked - ### Current Behaviour
If pulled at max range, or if Ouro can't find another target, it will not cast spells or Submerge
### Expected Blizzlike Behaviour
Ouro will always target another player
### Source
_No response_
### Steps to reproduce the problem
1. `.gm on`
2. `.go xyz -9144.474609 2126.168701 -64.607368 531 3.682328`
3. `.gm off`
4. see if Ouro has a target. Does not submerge, does not cast spells
### Extra Notes
https://github.com/chromiecraft/chromiecraft/issues/4391
### AC rev. hash/commit
https://github.com/azerothcore/azerothcore-wotlk/commit/93622ccb481e3484b90971f3450b4e3c9c902c9d
### Operating system
Windows 10
### Custom changes or Modules
_No response_ | priority | ouro can have no targets even if attacked current behaviour if pulled at max range or ouro can t find another target it will not cast spells or submerge expected blizzlike behaviour ouro will always target another player source no response steps to reproduce the problem gm on go xyz gm off see if ouro has a target does not submerge does not cast spells extra notes ac rev hash commit operating system windows custom changes or modules no response | 1 |
793,588 | 28,003,216,745 | IssuesEvent | 2023-03-27 13:54:17 | CivMC/SimpleAdminHacks | https://api.github.com/repos/CivMC/SimpleAdminHacks | closed | Add config option to prevent breaking diamond ore without a silk touch pickaxe | Category: Feature Priority: High | Players often mine with a normal pickaxe and save a silk touch for mining diamonds. But if there's block lag, or hiddenore decides the block you're breaking *right now* should be a diamond, you frustratingly break with the wrong pickaxe.
One option is having a player /config option that will prevent you from breaking an ore without a silk touch. That way a player can prevent themselves from accidentally breaking it with the wrong pickaxe.
This will help keep players away from using bots/scripts for these scenarios, or rule breaking environmental reading. | 1.0 | Add config option to prevent breaking diamond ore without a silk touch pickaxe - Players often mine with a normal pickaxe and save a silk touch for mining diamonds. But if there's block lag, or hiddenore decides the block you're breaking *right now* should be a diamond, you frustratingly break with the wrong pickaxe.
One option is having a player /config option that will prevent you from breaking an ore without a silk touch. That way a player can prevent themselves from accidentally breaking it with the wrong pickaxe.
This will help keep players away from using bots/scripts for these scenarios, or rule breaking environmental reading. | priority | add config option to prevent breaking diamond ore without a silk touch pickaxe players often mine with a normal pickaxe and save a silk touch for mining diamonds but if there s block lag or hiddenore decides the block you re breaking right now should be a diamond you frustratingly break with the wrong pickaxe one option is having a player config option that will prevent you from breaking an ore without a silk touch that way a player can prevent themselves from accidentally breaking it with the wrong pickaxe this will help keep players away from using bots scripts for these scenarios or rule breaking environmental reading | 1 |
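The per-player toggle proposed in the SimpleAdminHacks record above reduces to one predicate at block-break time: cancel the break when the toggle is on, the block is diamond ore, and the tool lacks Silk Touch. A plain-Python sketch of that predicate (a real implementation would live in a Bukkit/Spigot event handler; all names here are hypothetical):

```python
def should_cancel_break(block_type: str, tool_enchantments: set, player_config: dict) -> bool:
    """True when the proposed /config option says to block this ore break."""
    return (player_config.get("require_silk_for_diamond", False)
            and block_type == "DIAMOND_ORE"
            and "SILK_TOUCH" not in tool_enchantments)
```

Because the default is off, players who never touch the /config option see no behavior change.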
328,100 | 9,986,036,507 | IssuesEvent | 2019-07-10 18:04:23 | MarchWorks/AniTV | https://api.github.com/repos/MarchWorks/AniTV | closed | Show subtitles while playing a video | bug priority: hight | The subtitles don't work. They need to be extracted then get added with the `track` tag but they wouldn't work yet because of the following warning `Resource interpreted as TextTrack but transferred with MIME type text/plain`
**Solutions**
- Solve `Resource interpreted as TextTrack but transferred with MIME type text/plain` warning somehow.
- Use the native web media player to play videos | 1.0 | Show subtitles while playing a video - The subtitles don't work. They need to be extracted then get added with the `track` tag but they wouldn't work yet because of the following warning `Resource interpreted as TextTrack but transferred with MIME type text/plain`
**Solutions**
- Solve `Resource interpreted as TextTrack but transferred with MIME type text/plain` warning somehow.
- Use the native web media player to play videos | priority | show subtitles while playing a video the subtitles doesn t work they need to be extracted then get added with the track tag but they wouldn t work yet because of the following warning resource interpreted as texttrack but transferred with mime type text plain solutions solve resource interpreted as texttrack but transferred with mime type text plain warning somehow use the native web media player to play videos | 1 |
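The `Resource interpreted as TextTrack but transferred with MIME type text/plain` warning in the record above means the server answered the `<track>` request with the wrong content type; browsers expect WebVTT tracks served as `text/vtt`. A sketch of fixing that mapping with Python's stdlib `mimetypes` registry (AniTV's actual serving stack may differ; this only illustrates the MIME side of the fix):

```python
import mimetypes

# Teach the registry about WebVTT so .vtt subtitle tracks are no longer
# served as text/plain (or octet-stream) by a mimetypes-based file server.
mimetypes.add_type("text/vtt", ".vtt")

def content_type_for(path: str) -> str:
    guessed, _encoding = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"
```

Note that the HTML `<track>` element only supports WebVTT, so other subtitle formats would still need converting before this helps.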
643,497 | 20,958,364,960 | IssuesEvent | 2022-03-27 12:32:11 | isawnyu/pleiades-gazetteer | https://api.github.com/repos/isawnyu/pleiades-gazetteer | closed | ensure URIs in TTL export files are valid and well-formed: 3pts | bug priority: high | @ryanfb reports:
working with pleiades RDF dumps I get some issues with URLs in the ttl files not being URL-encoded that winds up needing hand-fixing before parsing in some things, e.g. angle brackets, square brackets, percent signs, etc. not being %-escaped
https://gist.github.com/ryanfb/6796a5e2d77a4c1423b4 for specific instances
Steps to reproduce:
the current flow encountering them uses Jena rdfcat to concatenate the places TTL and convert to RDF/XML
Migrated from http://pleiades.stoa.org/docs/site-issue-tracker/60
| 1.0 | ensure URIs in TTL export files are valid and well-formed: 3pts - @ryanfb reports:
working with Pleiades RDF dumps I get some issues with URLs in the ttl files not being URL-encoded, which winds up needing hand-fixing before parsing in some things, e.g. angle brackets, square brackets, percent signs, etc. not being %-escaped
https://gist.github.com/ryanfb/6796a5e2d77a4c1423b4 for specific instances
Steps to reproduce:
the current flow encountering them uses Jena rdfcat to concatenate the places TTL and convert to RDF/XML
Migrated from http://pleiades.stoa.org/docs/site-issue-tracker/60
| priority | ensure uris in ttl export files are valid and well formed ryanfb reports working with pleiades rdf dumps i get some issues with url s in the ttl files not being url encoded that winds up needing hand fixing before parsing in some things…e g angle brackets square brackets percent signs etc not being escaped for specific instances steps to reproduce current flow encountering them is using jena rdfcat to concatenate the places ttl and convert to rdfxm migrated from | 1 |
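The hand-fixing described in the record above can be automated by percent-encoding the characters that are illegal inside Turtle IRIs (angle brackets, square brackets, spaces, bare percent signs). A sketch of such a helper (hypothetical, not part of the Pleiades export pipeline):

```python
from urllib.parse import quote

def escape_uri(uri: str) -> str:
    # Keep RFC 3986 reserved characters (scheme separators, query
    # syntax) intact via `safe`, but escape <, >, [, ], %, spaces, etc.
    # Note: input that is already %-escaped would be double-encoded,
    # since % is deliberately not in the safe set.
    return quote(uri, safe=":/?#@!$&'()*+,;=~-._")

print(escape_uri("http://example.org/<foo> [1]"))
# http://example.org/%3Cfoo%3E%20%5B1%5D
```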
440,301 | 12,697,519,187 | IssuesEvent | 2020-06-22 11:58:33 | verdaccio/verdaccio | https://api.github.com/repos/verdaccio/verdaccio | reopened | /-/all endpoint doesn't use access groups | dev: high priority issue: bug | **Describe the bug**
We're using the simple htpasswd auth plugin currently. Tried using https://github.com/btshj-snail/snail-verdaccio-group/ but unfortunately couldn't get it to work yet, so we ended up with just regular lists of users instead of groups.
However, it seems that the /-/all endpoint stopped working - expectation here would be that authenticated users get all packages they are authenticated for when using that endpoint, but instead no packages are returned for anyone now (even if that person has explicit access to all packages).
**To Reproduce**
Set up explicit access instead of using $all or $authenticated:
```
'**':
# scoped packages
access: user-01 user-02
publish: user-01
unpublish: user-01
```
- Log in in the web backend or through npm
- Use the /-/all endpoint and see that it's empty
The same happens with explicit package access. The web interface lists the correct results; it is just the /-/all endpoint that isn't working (which matters because we're using this with Unity Package Manager, which accesses that endpoint to determine which packages it allows to download).
**Expected behavior**
user-01 and user-02 see all packages in /-/all endpoint
**Actual behavior**
/-/all endpoint is empty
**EDIT:** it seems the search endpoint, on the other hand, always returns all packages, also ignoring any auth! This is so weird.
**EDIT 2:** to summarize:
- current auth does not influence the outcome of search and /all
- search always returns all packages
- /all always returns 0 packages (unless for packages where $all is used)
- web interface lists the correct packages the user has auth for | 1.0 | /-/all endpoint doesn't use access groups - **Describe the bug**
We're using the simple htpasswd auth plugin currently. Tried using https://github.com/btshj-snail/snail-verdaccio-group/ but unfortunately couldn't get it to work yet, so we ended up with just regular lists of users instead of groups.
However, it seems that the /-/all endpoint stopped working - expectation here would be that authenticated users get all packages they are authenticated for when using that endpoint, but instead no packages are returned for anyone now (even if that person has explicit access to all packages).
**To Reproduce**
Set up explicit access instead of using $all or $authenticated:
```
'**':
# scoped packages
access: user-01 user-02
publish: user-01
unpublish: user-01
```
- Log in in the web backend or through npm
- Use the /-/all endpoint and see that it's empty
The same happens with explicit package access. The web interface lists the correct results; it is just the /-/all endpoint that isn't working (which matters because we're using this with Unity Package Manager, which accesses that endpoint to determine which packages it allows to download).
**Expected behavior**
user-01 and user-02 see all packages in /-/all endpoint
**Actual behavior**
/-/all endpoint is empty
**EDIT:** it seems the search endpoint, on the other hand, always returns all packages, also ignoring any auth! This is so weird.
**EDIT 2:** to summarize:
- current auth does not influence the outcome of search and /all
- search always returns all packages
- /all always returns 0 packages (unless for packages where $all is used)
- web interface lists the correct packages the user has auth for | priority | all endpoint doesn t use access groups describe the bug we re using the simple htpasswd auth plugin currently tried using but unfortunately couldn t get it to work yet so we ended up with just regular lists of users instead of groups however it seems that the all endpoint stopped working expectation here would be that authenticated users get all packages they are authenticated for when using that endpoint but instead no packages are returned for anyone now even if that person has explicit access to all packages to reproduce set up explicit access instead of using all or authenticated scoped packages access user user publish user unpublish user log in in the web backend or through npm use the all endpoint and see that it s empty the same happens with explicit package access the web interface lists the correct results just that important all endpoint isn t working which is important since we re using this with unity package manager which accesses that endpoint to determine which packages it allows to download expected behavior user and user see all packages in all endpoint actual behavior all endpoint is empty edit it seems the search endpoint on the other hand always returns all packages also ignoring any auth this is so weird edit to summarize current auth does not influence the outcome of search and all search always returns all packages all always returns packages unless for packages where all is used web interface lists the correct packages the user has auth for | 1 |
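The expected behaviour of /-/all in the record above is a simple intersection of the requesting user's groups with each package's `access` list. A sketch of that check in plain Python (data shapes are hypothetical, not verdaccio's internal API):

```python
def visible_packages(packages: dict, user_groups: list) -> list:
    """Return the package names whose access list intersects the user's groups."""
    groups = set(user_groups)
    return [
        name
        for name, acl in packages.items()
        if groups & set(acl.get("access", []))
    ]

# Mirrors the config excerpt from the report: explicit user lists
# instead of $all / $authenticated.
config = {
    "@scope/pkg-a": {"access": ["user-01", "user-02"]},
    "@scope/pkg-b": {"access": ["user-01"]},
}
print(visible_packages(config, ["user-02"]))  # ['@scope/pkg-a']
```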
147,055 | 5,633,015,181 | IssuesEvent | 2017-04-05 17:55:08 | mPowering/django-orb | https://api.github.com/repos/mPowering/django-orb | closed | Integrity error when adding a new resource in Spanish | bug high priority | Adding a new resource when the ORB language is set to 'en' is working fine, but if I try to add a resource when the language is set to 'es' I get an integrity error (see below).
I was using 'English' as the text entered in the language field, and presume the error is caused by the save then trying to add a new tag (in name_es) for the language rather than using the existing one?
The error also occurs on the staging server...
--------------------
IntegrityError at /resource/create/1/
(1062, "Duplicate entry 'English-9' for key 'orb_tag_name_797cef049b536205_uniq'")
Environment:
Request Method: POST
Request URL: http://localhost:8000/resource/create/1/
Django Version: 1.8.17
Python Version: 2.7.12
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'orb',
'crispy_forms',
'tastypie',
'tinymce',
'django_wysiwyg',
'haystack',
'sorl.thumbnail',
'orb.analytics',
'orb.review',
'django.contrib.humanize',
'modeltranslation',
'modeltranslation_exim',
'orb.peers')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'orb.middleware.SearchFormMiddleware')
Traceback:
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/django-mpowering/orb/views.py" in resource_create_step1_view
229. resource, form.cleaned_data.get('languages'), request.user, 'language')
File "/home/alex/data/Digital-Campus/development/mpowering/django-mpowering/orb/views.py" in resource_add_free_text_tags
894. 'update_user': user,
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/manager.py" in manager_method
127. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/modeltranslation/manager.py" in get_or_create
363. return super(MultilingualQuerySet, self).get_or_create(**kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/query.py" in get_or_create
407. return self._create_object_from_params(lookup, params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/query.py" in _create_object_from_params
447. six.reraise(*exc_info)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/query.py" in _create_object_from_params
439. obj = self.create(**params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/modeltranslation/manager.py" in create
355. return super(MultilingualQuerySet, self).create(**kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/query.py" in create
348. obj.save(force_insert=True, using=self.db)
File "/home/alex/data/Digital-Campus/development/mpowering/django-mpowering/orb/models.py" in save
540. super(Tag, self).save(*args, **kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/base.py" in save
734. force_update=force_update, update_fields=update_fields)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/base.py" in save_base
762. updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/base.py" in _save_table
846. result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/base.py" in _do_insert
885. using=using, raw=raw)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/manager.py" in manager_method
127. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/query.py" in _insert
920. return query.get_compiler(using=using).execute_sql(return_id)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py" in execute_sql
974. cursor.execute(sql, params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/backends/utils.py" in execute
79. return super(CursorDebugWrapper, self).execute(sql, params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/backends/utils.py" in execute
64. return self.cursor.execute(sql, params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/utils.py" in __exit__
98. six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/backends/utils.py" in execute
64. return self.cursor.execute(sql, params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py" in execute
124. return self.cursor.execute(query, args)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/MySQLdb/cursors.py" in execute
205. self.errorhandler(self, exc, value)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/MySQLdb/connections.py" in defaulterrorhandler
36. raise errorclass, errorvalue
Exception Type: IntegrityError at /resource/create/1/
Exception Value: (1062, "Duplicate entry 'English-9' for key 'orb_tag_name_797cef049b536205_uniq'") # | 1.0 | Integrity error when adding a new resource in Spanish - Adding a new resource when the ORB language is set to 'en' is working fine, but if I try to add a resource when the language is set to 'es' I get an integrity error (see below).
I was using 'English' as the text entered in the language field, and presume the error is caused by the save then trying to add a new tag (in name_es) for the language rather than using the existing one?
The error also occurs on the staging server...
--------------------
IntegrityError at /resource/create/1/
(1062, "Duplicate entry 'English-9' for key 'orb_tag_name_797cef049b536205_uniq'")
Environment:
Request Method: POST
Request URL: http://localhost:8000/resource/create/1/
Django Version: 1.8.17
Python Version: 2.7.12
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'orb',
'crispy_forms',
'tastypie',
'tinymce',
'django_wysiwyg',
'haystack',
'sorl.thumbnail',
'orb.analytics',
'orb.review',
'django.contrib.humanize',
'modeltranslation',
'modeltranslation_exim',
'orb.peers')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'orb.middleware.SearchFormMiddleware')
Traceback:
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/django-mpowering/orb/views.py" in resource_create_step1_view
229. resource, form.cleaned_data.get('languages'), request.user, 'language')
File "/home/alex/data/Digital-Campus/development/mpowering/django-mpowering/orb/views.py" in resource_add_free_text_tags
894. 'update_user': user,
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/manager.py" in manager_method
127. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/modeltranslation/manager.py" in get_or_create
363. return super(MultilingualQuerySet, self).get_or_create(**kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/query.py" in get_or_create
407. return self._create_object_from_params(lookup, params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/query.py" in _create_object_from_params
447. six.reraise(*exc_info)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/query.py" in _create_object_from_params
439. obj = self.create(**params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/modeltranslation/manager.py" in create
355. return super(MultilingualQuerySet, self).create(**kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/query.py" in create
348. obj.save(force_insert=True, using=self.db)
File "/home/alex/data/Digital-Campus/development/mpowering/django-mpowering/orb/models.py" in save
540. super(Tag, self).save(*args, **kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/base.py" in save
734. force_update=force_update, update_fields=update_fields)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/base.py" in save_base
762. updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/base.py" in _save_table
846. result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/base.py" in _do_insert
885. using=using, raw=raw)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/manager.py" in manager_method
127. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/query.py" in _insert
920. return query.get_compiler(using=using).execute_sql(return_id)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py" in execute_sql
974. cursor.execute(sql, params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/backends/utils.py" in execute
79. return super(CursorDebugWrapper, self).execute(sql, params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/backends/utils.py" in execute
64. return self.cursor.execute(sql, params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/utils.py" in __exit__
98. six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/backends/utils.py" in execute
64. return self.cursor.execute(sql, params)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py" in execute
124. return self.cursor.execute(query, args)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/MySQLdb/cursors.py" in execute
205. self.errorhandler(self, exc, value)
File "/home/alex/data/Digital-Campus/development/mpowering/mpowering_core/env/local/lib/python2.7/site-packages/MySQLdb/connections.py" in defaulterrorhandler
36. raise errorclass, errorvalue
Exception Type: IntegrityError at /resource/create/1/
Exception Value: (1062, "Duplicate entry 'English-9' for key 'orb_tag_name_797cef049b536205_uniq'") # | priority | integrity error when adding a new resource in spanish adding a new resource when the orb language is set to en is working fine but if i try to add a resource when the language is set to es i get an integrity error see below i was using english as the text entered in the language field and presume that this is then trying to add a new tag in name es for the language rather than using the existing one that s causing the error the error also occurs on the staging server integrityerror at resource create duplicate entry english for key orb tag name uniq environment request method post request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles django contrib sites orb crispy forms tastypie tinymce django wysiwyg haystack sorl thumbnail orb analytics orb review django contrib humanize modeltranslation modeltranslation exim orb peers installed middleware django contrib sessions middleware sessionmiddleware django middleware locale localemiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware orb middleware searchformmiddleware traceback file home alex data digital campus development mpowering mpowering core env local lib site packages django core handlers base py in get response response wrapped callback request callback args callback kwargs file home alex data digital campus development mpowering django mpowering orb views py in resource create view resource form cleaned data get languages request user language file home alex data digital campus development mpowering django mpowering orb views py in resource add 
free text tags update user user file home alex data digital campus development mpowering mpowering core env local lib site packages django db models manager py in manager method return getattr self get queryset name args kwargs file home alex data digital campus development mpowering mpowering core env local lib site packages modeltranslation manager py in get or create return super multilingualqueryset self get or create kwargs file home alex data digital campus development mpowering mpowering core env local lib site packages django db models query py in get or create return self create object from params lookup params file home alex data digital campus development mpowering mpowering core env local lib site packages django db models query py in create object from params six reraise exc info file home alex data digital campus development mpowering mpowering core env local lib site packages django db models query py in create object from params obj self create params file home alex data digital campus development mpowering mpowering core env local lib site packages modeltranslation manager py in create return super multilingualqueryset self create kwargs file home alex data digital campus development mpowering mpowering core env local lib site packages django db models query py in create obj save force insert true using self db file home alex data digital campus development mpowering django mpowering orb models py in save super tag self save args kwargs file home alex data digital campus development mpowering mpowering core env local lib site packages django db models base py in save force update force update update fields update fields file home alex data digital campus development mpowering mpowering core env local lib site packages django db models base py in save base updated self save table raw cls force insert force update using update fields file home alex data digital campus development mpowering mpowering core env local lib site packages django db models 
base py in save table result self do insert cls base manager using fields update pk raw file home alex data digital campus development mpowering mpowering core env local lib site packages django db models base py in do insert using using raw raw file home alex data digital campus development mpowering mpowering core env local lib site packages django db models manager py in manager method return getattr self get queryset name args kwargs file home alex data digital campus development mpowering mpowering core env local lib site packages django db models query py in insert return query get compiler using using execute sql return id file home alex data digital campus development mpowering mpowering core env local lib site packages django db models sql compiler py in execute sql cursor execute sql params file home alex data digital campus development mpowering mpowering core env local lib site packages django db backends utils py in execute return super cursordebugwrapper self execute sql params file home alex data digital campus development mpowering mpowering core env local lib site packages django db backends utils py in execute return self cursor execute sql params file home alex data digital campus development mpowering mpowering core env local lib site packages django db utils py in exit six reraise dj exc type dj exc value traceback file home alex data digital campus development mpowering mpowering core env local lib site packages django db backends utils py in execute return self cursor execute sql params file home alex data digital campus development mpowering mpowering core env local lib site packages django db backends mysql base py in execute return self cursor execute query args file home alex data digital campus development mpowering mpowering core env local lib site packages mysqldb cursors py in execute self errorhandler self exc value file home alex data digital campus development mpowering mpowering core env local lib site packages mysqldb connections 
py in defaulterrorhandler raise errorclass errorvalue exception type integrityerror at resource create exception value duplicate entry english for key orb tag name uniq | 1 |
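The traceback in the record above ends in an INSERT that violates the tag table's unique key: `get_or_create` apparently looks the tag up by one (translated) field while the unique index covers another. Independent of the modeltranslation specifics, the generic duplicate-entry failure and its usual workaround (retry the SELECT when the INSERT conflicts) can be sketched with plain sqlite3 (illustrative only, not the ORB code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tag (name TEXT, category INTEGER, UNIQUE(name, category))"
)

def get_or_create_tag(name: str, category: int) -> tuple:
    # Try the INSERT first; if the row already exists, the UNIQUE
    # constraint raises IntegrityError and we fall through to SELECT.
    try:
        conn.execute("INSERT INTO tag VALUES (?, ?)", (name, category))
    except sqlite3.IntegrityError:
        pass  # row was created earlier (or by a concurrent writer)
    return conn.execute(
        "SELECT name, category FROM tag WHERE name = ? AND category = ?",
        (name, category),
    ).fetchone()

print(get_or_create_tag("English", 9))  # ('English', 9)
print(get_or_create_tag("English", 9))  # same row, no duplicate insert
```

For the bug itself, the fix would be making the lookup and the unique index agree on which (translated) column identifies a tag; the retry only illustrates the conflict-handling pattern.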
810,208 | 30,230,617,055 | IssuesEvent | 2023-07-06 06:35:01 | GSM-MSG/Hi-v2-Android | https://api.github.com/repos/GSM-MSG/Hi-v2-Android | closed | Write Homebase presentation logic | 1️⃣ Priority: High ✨ Feature | ### Describe
- Implement the presentation logic needed for Homebase
### Additional
_No response_ | 1.0 | Write Homebase presentation logic - ### Describe
- Implement the presentation logic needed for Homebase
### Additional
_No response_ | priority | write homebase presentation logic describe implement the presentation logic needed for homebase additional no response | 1 |
5,767 | 2,579,449,950 | IssuesEvent | 2015-02-13 10:20:25 | olga-jane/prizm | https://api.github.com/repos/olga-jane/prizm | closed | Crash on release note (not saved - transient instance) | bug bug - crash/performance/leak Coding HIGH priority Release note | STR:
Open Mill->Release note
Fill in release note and railcar fields
Add one pipe
Click save
Result: exception
************** Exception Text **************
NHibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing or set cascade action for the property to something that would make it autosave. Type: Prizm.Domain.Entity.Mill.Railcar, Entity: Prizm.Domain.Entity.Mill.Railcar
at NHibernate.Engine.ForeignKeys.GetEntityIdentifierIfNotUnsaved(String entityName, Object entity, ISessionImplementor session)
at NHibernate.Type.EntityType.GetIdentifier(Object value, ISessionImplementor session)
at NHibernate.Type.ManyToOneType.IsDirty(Object old, Object current, Boolean[] checkable, ISessionImplementor session)
at NHibernate.Type.TypeHelper.Dirty(StandardProperty[] properties, Object[] currentState, Object[] previousState, Boolean[][] includeColumns, Boolean anyUninitializedProperties, ISessionImplementor session, Int32 i)
at NHibernate.Type.TypeHelper.FindDirty(StandardProperty[] properties, Object[] currentState, Object[] previousState, Boolean[][] includeColumns, Boolean anyUninitializedProperties, ISessionImplementor session)
at NHibernate.Persister.Entity.AbstractEntityPersister.FindDirty(Object[] currentState, Object[] previousState, Object entity, ISessionImplementor session)
at NHibernate.Event.Default.DefaultFlushEntityEventListener.DirtyCheck(FlushEntityEvent event)
at NHibernate.Event.Default.DefaultFlushEntityEventListener.IsUpdateNecessary(FlushEntityEvent event, Boolean mightBeDirty)
at NHibernate.Event.Default.DefaultFlushEntityEventListener.OnFlushEntity(FlushEntityEvent event)
at NHibernate.Event.Default.AbstractFlushingEventListener.FlushEntities(FlushEvent event)
at NHibernate.Event.Default.AbstractFlushingEventListener.FlushEverythingToExecutions(FlushEvent event)
at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event)
at NHibernate.Impl.SessionImpl.Flush()
at NHibernate.Transaction.AdoTransaction.Commit()
at Prizm.Main.Forms.ReleaseNote.ReleaseNoteRepositories.Commit()
at Prizm.Main.Forms.ReleaseNote.NewEdit.SaveReleaseNoteCommand.Execute()
at Prizm.Main.Commands.CommandInfo.SimpleButtonAttacher.btn_Click(Object sender, EventArgs e)
at System.Windows.Forms.Control.OnClick(EventArgs e)
at DevExpress.XtraEditors.BaseButton.OnClick(EventArgs e)
at DevExpress.XtraEditors.BaseButton.OnMouseUp(MouseEventArgs e)
at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
at System.Windows.Forms.Control.WndProc(Message& m)
at DevExpress.Utils.Controls.ControlBase.WndProc(Message& m)
at DevExpress.XtraEditors.BaseControl.WndProc(Message& msg)
at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) | 1.0 | Crash on release note (not saved - transient instance) - STR:
Open Mill->Release note
Fill in release note and railcar fields
Add one pipe
Click save
Result: exception
************** Exception Text **************
NHibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing or set cascade action for the property to something that would make it autosave. Type: Prizm.Domain.Entity.Mill.Railcar, Entity: Prizm.Domain.Entity.Mill.Railcar
at NHibernate.Engine.ForeignKeys.GetEntityIdentifierIfNotUnsaved(String entityName, Object entity, ISessionImplementor session)
at NHibernate.Type.EntityType.GetIdentifier(Object value, ISessionImplementor session)
at NHibernate.Type.ManyToOneType.IsDirty(Object old, Object current, Boolean[] checkable, ISessionImplementor session)
at NHibernate.Type.TypeHelper.Dirty(StandardProperty[] properties, Object[] currentState, Object[] previousState, Boolean[][] includeColumns, Boolean anyUninitializedProperties, ISessionImplementor session, Int32 i)
at NHibernate.Type.TypeHelper.FindDirty(StandardProperty[] properties, Object[] currentState, Object[] previousState, Boolean[][] includeColumns, Boolean anyUninitializedProperties, ISessionImplementor session)
at NHibernate.Persister.Entity.AbstractEntityPersister.FindDirty(Object[] currentState, Object[] previousState, Object entity, ISessionImplementor session)
at NHibernate.Event.Default.DefaultFlushEntityEventListener.DirtyCheck(FlushEntityEvent event)
at NHibernate.Event.Default.DefaultFlushEntityEventListener.IsUpdateNecessary(FlushEntityEvent event, Boolean mightBeDirty)
at NHibernate.Event.Default.DefaultFlushEntityEventListener.OnFlushEntity(FlushEntityEvent event)
at NHibernate.Event.Default.AbstractFlushingEventListener.FlushEntities(FlushEvent event)
at NHibernate.Event.Default.AbstractFlushingEventListener.FlushEverythingToExecutions(FlushEvent event)
at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event)
at NHibernate.Impl.SessionImpl.Flush()
at NHibernate.Transaction.AdoTransaction.Commit()
at Prizm.Main.Forms.ReleaseNote.ReleaseNoteRepositories.Commit()
at Prizm.Main.Forms.ReleaseNote.NewEdit.SaveReleaseNoteCommand.Execute()
at Prizm.Main.Commands.CommandInfo.SimpleButtonAttacher.btn_Click(Object sender, EventArgs e)
at System.Windows.Forms.Control.OnClick(EventArgs e)
at DevExpress.XtraEditors.BaseButton.OnClick(EventArgs e)
at DevExpress.XtraEditors.BaseButton.OnMouseUp(MouseEventArgs e)
at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
at System.Windows.Forms.Control.WndProc(Message& m)
at DevExpress.Utils.Controls.ControlBase.WndProc(Message& m)
at DevExpress.XtraEditors.BaseControl.WndProc(Message& msg)
at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) | priority | crash on release note not saved transient instance str open mill release note fill in release note and railcar fields add one pipe click save result exception exception text nhibernate transientobjectexception object references an unsaved transient instance save the transient instance before flushing or set cascade action for the property to something that would make it autosave type prizm domain entity mill railcar entity prizm domain entity mill railcar at nhibernate engine foreignkeys getentityidentifierifnotunsaved string entityname object entity isessionimplementor session at nhibernate type entitytype getidentifier object value isessionimplementor session at nhibernate type manytoonetype isdirty object old object current boolean checkable isessionimplementor session at nhibernate type typehelper dirty standardproperty properties object currentstate object previousstate boolean includecolumns boolean anyuninitializedproperties isessionimplementor session i at nhibernate type typehelper finddirty standardproperty properties object currentstate object previousstate boolean includecolumns boolean anyuninitializedproperties isessionimplementor session at nhibernate persister entity abstractentitypersister finddirty object currentstate object previousstate object entity isessionimplementor session at nhibernate event default defaultflushentityeventlistener dirtycheck flushentityevent event at nhibernate event default defaultflushentityeventlistener isupdatenecessary flushentityevent event boolean mightbedirty at nhibernate event default defaultflushentityeventlistener onflushentity flushentityevent event at nhibernate event default abstractflushingeventlistener flushentities flushevent event at nhibernate event default abstractflushingeventlistener flusheverythingtoexecutions flushevent event at nhibernate event default defaultflusheventlistener onflush flushevent event at nhibernate impl sessionimpl flush at nhibernate transaction adotransaction commit at prizm main forms releasenote releasenoterepositories commit at prizm main forms releasenote newedit savereleasenotecommand execute at prizm main commands commandinfo simplebuttonattacher btn click object sender eventargs e at system windows forms control onclick eventargs e at devexpress xtraeditors basebutton onclick eventargs e at devexpress xtraeditors basebutton onmouseup mouseeventargs e at system windows forms control wmmouseup message m mousebuttons button clicks at system windows forms control wndproc message m at devexpress utils controls controlbase wndproc message m at devexpress xtraeditors basecontrol wndproc message msg at system windows forms control controlnativewindow onmessage message m at system windows forms control controlnativewindow wndproc message m at system windows forms nativewindow callback intptr hwnd msg intptr wparam intptr lparam | 1
828,647 | 31,837,111,301 | IssuesEvent | 2023-09-14 14:08:24 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | reopened | //bindings/pydrake/visualization:py/model_visualizer_test test_reload failure in CI | type: bug priority: high component: continuous integration | ### What happened?
**//bindings/pydrake/visualization:py/model_visualizer_test** failed in CI:
Aug 7 (x2)
https://drake-jenkins.csail.mit.edu/view/Continuous%20Production/job/mac-arm-monterey-clang-bazel-continuous-release/1101/
https://drake-jenkins.csail.mit.edu/view/Continuous%20Production/job/mac-arm-monterey-clang-bazel-continuous-release/1102/
Aug 15
https://drake-jenkins.csail.mit.edu/view/Continuous%20Production/job/mac-arm-ventura-clang-bazel-continuous-release/345/
### Version
_No response_
### What operating system are you using?
Other
### What installation option are you using?
_No response_
### Relevant log output
```shell
======================================================================
FAIL [1.205s]: test_reload (model_visualizer_test.TestModelVisualizer.test_reload)
Checks that the _reload() function does not crash.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/admin/workspace/mac-arm-monterey-clang-bazel-continuous-release/_bazel_admin/d8209b58b645930155068a99d03380e4/sandbox/darwin-sandbox/7614/execroot/drake/bazel-out/darwin_arm64-opt/bin/bindings/pydrake/visualization/py/model_visualizer_test.runfiles/drake/bindings/pydrake/visualization/test/model_visualizer_test.py", line 229, in test_reload
self.assertEqual(meshcat.GetButtonClicks(button), 1)
AssertionError: 0 != 1
----------------------------------------------------------------------
```
| 1.0 | //bindings/pydrake/visualization:py/model_visualizer_test test_reload failure in CI - ### What happened?
**//bindings/pydrake/visualization:py/model_visualizer_test** failed in CI:
Aug 7 (x2)
https://drake-jenkins.csail.mit.edu/view/Continuous%20Production/job/mac-arm-monterey-clang-bazel-continuous-release/1101/
https://drake-jenkins.csail.mit.edu/view/Continuous%20Production/job/mac-arm-monterey-clang-bazel-continuous-release/1102/
Aug 15
https://drake-jenkins.csail.mit.edu/view/Continuous%20Production/job/mac-arm-ventura-clang-bazel-continuous-release/345/
### Version
_No response_
### What operating system are you using?
Other
### What installation option are you using?
_No response_
### Relevant log output
```shell
======================================================================
FAIL [1.205s]: test_reload (model_visualizer_test.TestModelVisualizer.test_reload)
Checks that the _reload() function does not crash.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/admin/workspace/mac-arm-monterey-clang-bazel-continuous-release/_bazel_admin/d8209b58b645930155068a99d03380e4/sandbox/darwin-sandbox/7614/execroot/drake/bazel-out/darwin_arm64-opt/bin/bindings/pydrake/visualization/py/model_visualizer_test.runfiles/drake/bindings/pydrake/visualization/test/model_visualizer_test.py", line 229, in test_reload
self.assertEqual(meshcat.GetButtonClicks(button), 1)
AssertionError: 0 != 1
----------------------------------------------------------------------
```
| priority | bindings pydrake visualization py model visualizer test test reload failure in ci what happened bindings pydrake visualization py model visualizer test failed in ci aug aug version no response what operating system are you using other what installation option are you using no response relevant log output shell fail test reload model visualizer test testmodelvisualizer test reload checks that the reload function does not crash traceback most recent call last file users admin workspace mac arm monterey clang bazel continuous release bazel admin sandbox darwin sandbox execroot drake bazel out darwin opt bin bindings pydrake visualization py model visualizer test runfiles drake bindings pydrake visualization test model visualizer test py line in test reload self assertequal meshcat getbuttonclicks button assertionerror | 1 |
620,620 | 19,565,712,738 | IssuesEvent | 2022-01-03 23:46:06 | bounswe/2021SpringGroup6 | https://api.github.com/repos/bounswe/2021SpringGroup6 | closed | Android - Event Search filters implementation | Type: Feature Status: Complete Platform: Mobile Priority: High | Filters(sport type, city name etc.) will be implemented in the event search. The location-based search will be combined with other filters. In location based search, the user will select two locations for two markers. Google Map service will be used for the map. | 1.0 | Android - Event Search filters implementation - Filters(sport type, city name etc.) will be implemented in the event search. The location-based search will be combined with other filters. In location based search, the user will select two locations for two markers. Google Map service will be used for the map. | priority | android event search filters implementation filters sport type city name etc will be implemented in the event search the location based search will be combined with other filters in location based search the user will select two locations for two markers google map service will be used for the map | 1 |
68,667 | 3,292,081,098 | IssuesEvent | 2015-10-30 12:58:05 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | Request for analysis server to return/send the PID of its process | Analyzer-Server Area-Analyzer Customer-Atom Priority-High Type-Enhancement | In order to help display resource consumption information to the user, and in order for clients to make informed decisions around the health of the AS, we would like to request a new API (or augmentation of existing API) to return the PID from the analysis server's process.
See also https://github.com/dart-lang/sdk/issues/24020
Thanks for the review! | 1.0 | Request for analysis server to return/send the PID of its process - In order to help display resource consumption information to the user, and in order for clients to make informed decisions around the health of the AS, we would like to request a new API (or augmentation of existing API) to return the PID from the analysis server's process.
See also https://github.com/dart-lang/sdk/issues/24020
Thanks for the review! | priority | request for analysis server to return send the pid of its process in order to help display resource consumption information to the user and in order for clients to make informed decisions around the health of the as we would like to request a new api or augmentation of existing api to return the pid from the analysis server s process see also thanks for the review | 1 |
388,869 | 11,493,814,963 | IssuesEvent | 2020-02-11 23:56:49 | MolSnoo/Alter-Ego | https://api.github.com/repos/MolSnoo/Alter-Ego | reopened | Make stash command | feature high priority | Stash command should allow a player (or moderator) to stash an item from either of their hands to another inventory slot. | 1.0 | Make stash command - Stash command should allow a player (or moderator) to stash an item from either of their hands to another inventory slot. | priority | make stash command stash command should allow a player or moderator to stash an item from either of their hands to another inventory slot | 1 |
641,757 | 20,833,927,084 | IssuesEvent | 2022-03-19 22:24:46 | NerdyNomads/Text-Savvy | https://api.github.com/repos/NerdyNomads/Text-Savvy | opened | Workspace database is connected to the web app. | high priority front-end back-end | Connect the workspace database and get it's functions (add, delete, update, etc.) working. | 1.0 | Workspace database is connected to the web app. - Connect the workspace database and get it's functions (add, delete, update, etc.) working. | priority | workspace database is connected to the web app connect the workspace database and get it s functions add delete update etc working | 1 |
168,036 | 6,359,927,716 | IssuesEvent | 2017-07-31 08:45:32 | BinPar/PRM | https://api.github.com/repos/BinPar/PRM | closed | PRM UNI PRO: DASHBOARD PROMOTORES no funciona correctamente | Priority: High | Los datos que aparecen en los DB de los promotores no reflejan los datos con fidelidad.
En el de Noemí, no aparece ningún dato.
En el mío, si que aparecen datos.

@CristianBinpar @minigoBinpar @franciscorrr | 1.0 | PRM UNI PRO: DASHBOARD PROMOTORES no funciona correctamente - Los datos que aparecen en los DB de los promotores no reflejan los datos con fidelidad.
En el de Noemí, no aparece ningún dato.
En el mío, si que aparecen datos.

@CristianBinpar @minigoBinpar @franciscorrr | priority | prm uni pro dashboard promotores no funciona correctamente los datos que aparecen en los db de los promotores no reflejan los datos con fidelidad en el de noemí no aparece ningún dato en el mío si que aparecen datos cristianbinpar minigobinpar franciscorrr | 1 |
508,586 | 14,703,040,913 | IssuesEvent | 2021-01-04 14:30:06 | hotosm/tasking-manager | https://api.github.com/repos/hotosm/tasking-manager | opened | Create public organization frontend page | Component: Frontend Difficulty: 5 Priority: High Status: Needs implementation Type: Enhancement | Create a public frontend page for organizations.
**Content:**
- [ ] Projects list
- [ ] Basic organization information
- [ ] Managers list
- [ ] Teams list
It can be something similat to the teams page:

**Authentication:**
- It should be accessible by any user, even unauthenticated ones | 1.0 | Create public organization frontend page - Create a public frontend page for organizations.
**Content:**
- [ ] Projects list
- [ ] Basic organization information
- [ ] Managers list
- [ ] Teams list
It can be something similat to the teams page:

**Authentication:**
- It should be accessible by any user, even unauthenticated ones | priority | create public organization frontend page create a public frontend page for organizations content projects list basic organization information managers list teams list it can be something similat to the teams page authentication it should be accessible by any user even unauthenticated ones | 1 |
390,264 | 11,541,043,973 | IssuesEvent | 2020-02-18 02:30:39 | bigbass1997/NotEnoughProduction | https://api.github.com/repos/bigbass1997/NotEnoughProduction | closed | Compressed RecipeAtlas Compatibility | Feature High Priority | Change the recipe loading logic to accept JSON files that have been compressed. This doesn't have to work with all types of compression; rather it only needs to be the same type as what RecEx uses to compress the JSON in the first place.
The atlas loading process should still support normal JSON files too. If the decompression process takes longer than 1 second, then perhaps this should actually be apart of the download process instead. So that when the compressed atlas is downloaded, it's decompressed and saved that way.
Planning to use the [zeroturnaround/zt-zip](https://github.com/zeroturnaround/zt-zip) library (maven) to assist with the (de)compression. | 1.0 | Compressed RecipeAtlas Compatibility - Change the recipe loading logic to accept JSON files that have been compressed. This doesn't have to work with all types of compression; rather it only needs to be the same type as what RecEx uses to compress the JSON in the first place.
The atlas loading process should still support normal JSON files too. If the decompression process takes longer than 1 second, then perhaps this should actually be apart of the download process instead. So that when the compressed atlas is downloaded, it's decompressed and saved that way.
Planning to use the [zeroturnaround/zt-zip](https://github.com/zeroturnaround/zt-zip) library (maven) to assist with the (de)compression. | priority | compressed recipeatlas compatibility change the recipe loading logic to accept json files that have been compressed this doesn t have to work with all types of compression rather it only needs to be the same type as what recex uses to compress the json in the first place the atlas loading process should still support normal json files too if the decompression process takes longer than second then perhaps this should actually be apart of the download process instead so that when the compressed atlas is downloaded it s decompressed and saved that way planning to use the library maven to assist with the de compression | 1 |
291,577 | 8,933,332,311 | IssuesEvent | 2019-01-23 01:32:33 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [8.0.0 and below] USER ISSUE: Land Claim Issues | High Priority In Progress | **Version:** 0.7.8.5 beta
**Steps to Reproduce:**
1. Select your Land Claim Stake
2. Go stand on the plot of land you want to claim
3. Right click on the plot of land to claim it.
**Expected behavior:**
That it would go the same color that I have set which is red
for that plot.
**Actual behavior:**
Instead it is staying the same ground color instead of going
red but it insists that I already own that plot if I try and own it a it again it
it says that I already own that plot. But it refuses to go the same color as the rest
of my plots which I have set to red.
I don't know why and Yes I have tried to unown it and then reown it again with
the same results it won't turn red like the rest of my plots when I have
my land stake tool selected and I don't know why. I believe this to be some kind of
bug with the game at this point because I have tried everything even unowning it
and then reowning it and it does the same thing! It just won't turn red.
Now if I stand on the plot and look down and hit E it is listed as me owning it
but why isn't it red like the rest of my plots when I have my land select tool
selected.
Now in the screenshot you can see that I am not standing on it and I did that
on purpose for the screenshot so it wasn't yellow and you can see that
the plot is just the normal ground color and yes I have my Land Claim Stake
tool selected as you can see all the other ones around me are all red except that
one. This is maddening why is it doing this with this one spot.
This is my own server FYI running on a machine that I have here
as a test server that I play around with. Never saw this before
it has never done this in the past just with this version.
A copy of the image can also be seen at https://ibb.co/1zy107k
**Do you have mods installed? Does issue happen when no mods are installed?:**
I have no mods installed.

| 1.0 | [8.0.0 and below] USER ISSUE: Land Claim Issues - **Version:** 0.7.8.5 beta
**Steps to Reproduce:**
1. Select your Land Claim Stake
2. Go stand on the plot of land you want to claim
3. Right click on the plot of land to claim it.
**Expected behavior:**
That it would go the same color that I have set which is red
for that plot.
**Actual behavior:**
Instead it is staying the same ground color instead of going
red but it insists that I already own that plot if I try and own it a it again it
it says that I already own that plot. But it refuses to go the same color as the rest
of my plots which I have set to red.
I don't know why and Yes I have tried to unown it and then reown it again with
the same results it won't turn red like the rest of my plots when I have
my land stake tool selected and I don't know why. I believe this to be some kind of
bug with the game at this point because I have tried everything even unowning it
and then reowning it and it does the same thing! It just won't turn red.
Now if I stand on the plot and look down and hit E it is listed as me owning it
but why isn't it red like the rest of my plots when I have my land select tool
selected.
Now in the screenshot you can see that I am not standing on it and I did that
on purpose for the screenshot so it wasn't yellow and you can see that
the plot is just the normal ground color and yes I have my Land Claim Stake
tool selected as you can see all the other ones around me are all red except that
one. This is maddening why is it doing this with this one spot.
This is my own server FYI running on a machine that I have here
as a test server that I play around with. Never saw this before
it has never done this in the past just with this version.
A copy of the image can also be seen at https://ibb.co/1zy107k
**Do you have mods installed? Does issue happen when no mods are installed?:**
I have no mods installed.

| priority | user issue land claim issues version beta steps to reproduce select your land claim stake go stand on the plot of land you want to claim right click on the plot of land to claim it expected behavior that it would go the same color that i have set which is red for that plot actual behavior instead it is staying the same ground color instead of going red but it insists that i already own that plot if i try and own it a it again it it says that i already own that plot but it refuses to go the same color as the rest of my plots which i have set to red i don t know why and yes i have tried to unown it and then reown it again with the same results it won t turn red like the rest of my plots when i have my land stake tool selected and i don t know why i believe this to be some kind of bug with the game at this point because i have tried everything even unowning it and then reowning it and it does the same thing it just won t turn red now if i stand on the plot and look down and hit e it is listed as me owning it but why isn t it red like the rest of my plots when i have my land select tool selected now in the screenshot you can see that i am not standing on it and i did that on purpose for the screenshot so it wasn t yellow and you can see that the plot is just the normal ground color and yes i have my land claim stake tool selected as you can see all the other ones around me are all red except that one this is maddening why is it doing this with this one spot this is my own server fyi running on a machine that i have here as a test server that i play around with never saw this before it has never done this in the past just with this version a copy of the image can also be seen at do you have mods installed does issue happen when no mods are installed i have no mods installed | 1 |
240,142 | 7,800,479,094 | IssuesEvent | 2018-06-09 09:55:36 | tine20/Tine-2.0-Open-Source-Groupware-and-CRM | https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM | closed | 0009522:
update htmlpurifier to 4.6.0 | Bug Mantis Tinebase high priority | **Reported by pschuele on 10 Jan 2014 15:53**
**Version:** Collin (2013.10.3)
update htmlpurifier to 4.6.0
**Additional information:** http://htmlpurifier.org/
| 1.0 | 0009522:
update htmlpurifier to 4.6.0 - **Reported by pschuele on 10 Jan 2014 15:53**
**Version:** Collin (2013.10.3)
update htmlpurifier to 4.6.0
**Additional information:** http://htmlpurifier.org/
| priority | update htmlpurifier to reported by pschuele on jan version collin update htmlpurifier to additional information | 1 |
390,073 | 11,524,429,523 | IssuesEvent | 2020-02-15 00:32:33 | zulip/zulip | https://api.github.com/repos/zulip/zulip | opened | Optimize performance of get_events | area: production difficult priority: high | The get_events Tornado endpoint is our highest-traffic endpoint, so something well worth optimizing. In https://chat.zulip.org/#narrow/stream/3-backend/topic/profiling.20get_events/near/816856 I describe a bunch of profiling I did to figure out where the time is going, resulting in several commits being merged. Here's some summary details on where the time is going (all profiled in a local development environment):
* The actual view function code for `get_events_backend` only takes 250us to run, which is great!
* ~300us is spent in the `has_request_variables` wrapper
* 1400us is spent in rate-limiting code, which is tracked at #13913.
* 350us+ in`process_client`. The thread in Zulip has a number of thoughts on optimizing this; the biggest win is likely to come from debouncing the RabbitMQ writes to not be 1/request, since those seem to take 200us+ even with RabbitMQ running on localhost. Some design work is likely required here, but the semantics we probably want are "process_user_activity writes a local-memory data structure and also causes a timer to be set to send the data to RabbitMQ in 5 seconds if such a timer isn't already running". This would have the effect of at most delaying UserActivity logging by 5 seconds, while in a high-traffic environment processing hundreds of requests per second, it'd turn 200us/request of RabbitMQ work into ~200us/(100+ requests), which is negligible.
* 300us in our logging middleware. This is dominated by `logger.info`; I'm not sure if there's anything to be done here (would need to profile that code path inside Django specifically). Certainly 250us to log a line of output after it's been constructed feels really high to me.
* ~~630us in CommonMiddleware~~ addressed in 229090a3a58de15eb2a76a8e513fa5d9a190170c~~. I'm leaving a note here that we should look into extending this change to address all API endpoints; we'll want to test that doing so doesn't break anything, but I think it shouldn't, because those redirects would have been useless as none of our API endpoints end with `/`.
* 370us in LocaleMiddleware.process_request, setting up translations (likely runs twice per get_events request). We may want to look at whether we can disable i18n for this code path.
* ~300us for Tornado to write the HTTP response over the wire (probably unavoidable)
* ~300us to convert a Tornado request into a Django HttpRequest (run twice; may be possible to optimize by copying the HttpRequest object after making it the first time, rather than making a request a second time? Unclear.)
* ~80us to check authentication (mostly a single memcached query) -- likely hard to optimize.
* Likely 1000us or more of time in Django-internal places I haven't profiled that we're also unlikely to be able to optimize a lot.
It's be a significant improvement to Zulip's backend scalability if we can optimize the larger line items here, which makes this particular case very different from basically anything else (where microseconds are not worth thinking about). | 1.0 | Optimize performance of get_events - The get_events Tornado endpoint is our highest-traffic endpoint, so something well worth optimizing. In https://chat.zulip.org/#narrow/stream/3-backend/topic/profiling.20get_events/near/816856 I describe a bunch of profiling I did to figure out where the time is going, resulting in several commits being merged. Here's some summary details on where the time is going (all profiled in a local development environment):
* The actual view function code for `get_events_backend` only takes 250us to run, which is great!
* ~300us is spent in the `has_request_variables` wrapper
* 1400us is spent in rate-limiting code, which is tracked at #13913.
* 350us+ in`process_client`. The thread in Zulip has a number of thoughts on optimizing this; the biggest win is likely to come from debouncing the RabbitMQ writes to not be 1/request, since those seem to take 200us+ even with RabbitMQ running on localhost. Some design work is likely required here, but the semantics we probably want are "process_user_activity writes a local-memory data structure and also causes a timer to be set to send the data to RabbitMQ in 5 seconds if such a timer isn't already running". This would have the effect of at most delaying UserActivity logging by 5 seconds, while in a high-traffic environment processing hundreds of requests per second, it'd turn 200us/request of RabbitMQ work into ~200us/(100+ requests), which is negligible.
* 300us in our logging middleware. This is dominated by `logger.info`; I'm not sure if there's anything to be done here (would need to profile that code path inside Django specifically). Certainly 250us to log a line of output after it's been constructed feels really high to me.
* ~~630us in CommonMiddleware~~ addressed in 229090a3a58de15eb2a76a8e513fa5d9a190170c~~. I'm leaving a note here that we should look into extending this change to address all API endpoints; we'll want to test that doing so doesn't break anything, but I think it shouldn't, because those redirects would have been useless as none of our API endpoints end with `/`.
* 370us in LocaleMiddleware.process_request, setting up translations (likely runs twice per get_events request). We may want to look at whether we can disable i18n for this code path.
* ~300us for Tornado to write the HTTP response over the wire (probably unavoidable)
* ~300us to convert a Tornado request into a Django HttpRequest (run twice; may be possible to optimize by copying the HttpRequest object after making it the first time, rather than making a request a second time? Unclear.)
* ~80us to check authentication (mostly a single memcached query) -- likely hard to optimize.
* Likely 1000us or more of time in Django-internal places I haven't profiled that we're also unlikely to be able to optimize a lot.
It's be a significant improvement to Zulip's backend scalability if we can optimize the larger line items here, which makes this particular case very different from basically anything else (where microseconds are not worth thinking about). | priority | optimize performance of get events the get events tornado endpoint is our highest traffic endpoint so something well worth optimizing in i describe a bunch of profiling i did to figure out where the time is going resulting in several commits being merged here s some summary details on where the time is going all profiled in a local development environment the actual view function code for get events backend only takes to run which is great is spent in the has request variables wrapper is spent in rate limiting code which is tracked at in process client the thread in zulip has a number of thoughts on optimizing this the biggest win is likely to come from debouncing the rabbitmq writes to not be request since those seem to take even with rabbitmq running on localhost some design work is likely required here but the semantics we probably want are process user activity writes a local memory data structure and also causes a timer to be set to send the data to rabbitmq in seconds if such a timer isn t already running this would have the effect of at most delaying useractivity logging by seconds while in a high traffic environment processing hundreds of requests per second it d turn request of rabbitmq work into requests which is negligible in our logging middleware this is dominated by logger info i m not sure if there s anything to be done here would need to profile that code path inside django specifically certainly to log a line of output after it s been constructed feels really high to me in commonmiddleware addressed in i m leaving a note here that we should look into extending this change to address all api endpoints we ll want to test that doing so doesn t break anything but i think it shouldn t because those redirects would have been useless as none of our api endpoints end with in localemiddleware process request setting up translations likely runs twice per get events request we may want to look at whether we can disable for this code path for tornado to write the http response over the wire probably unavoidable to convert a tornado request into a django httprequest run twice may be possible to optimize by copying the httprequest object after making it the first time rather than making a request a second time unclear to check authentication mostly a single memcached query likely hard to optimize likely or more of time in django internal places i haven t profiled that we re also unlikely to be able to optimize a lot it s be a significant improvement to zulip s backend scalability if we can optimize the larger line items here which makes this particular case very different from basically anything else where microseconds are not worth thinking about | 1
158,165 | 6,022,966,265 | IssuesEvent | 2017-06-07 22:26:55 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | Intent to implement: Bind | Category: Dynamic/Personalized Content INTENT TO IMPLEMENT P1: High Priority Type: Feature Request | ## Summary
Bind is a new AMP extension that allows elements to mutate in response to user actions or data changes via data binding and simple JS-like expressions.
This will allow publishers to build responsive UIs for use cases not covered by existing AMP extended components. See #5059 for possible use cases.
## Usage
In this example, the \<p\> element will have its `textContent` changed from "" to "Hello World".
```
<amp-dataset id="foo">{ bar: "Hello World" }</amp-dataset>
<p [text]="foo.bar"></p>
```
Syntax is subject to change.
## Details
Syntax discussion: #5760
Design doc: [Link](https://docs.google.com/document/d/1MdeRkyK84hREMqIglAf3-MYaFSWV2uYbUnleJjC1bYw/pub) | 1.0 | Intent to implement: Bind - ## Summary
Bind is a new AMP extension that allows elements to mutate in response to user actions or data changes via data binding and simple JS-like expressions.
This will allow publishers to build responsive UIs for use cases not covered by existing AMP extended components. See #5059 for possible use cases.
## Usage
In this example, the \<p\> element will have its `textContent` changed from "" to "Hello World".
```
<amp-dataset id="foo">{ bar: "Hello World" }</amp-dataset>
<p [text]="foo.bar"></p>
```
Syntax is subject to change.
## Details
Syntax discussion: #5760
Design doc: [Link](https://docs.google.com/document/d/1MdeRkyK84hREMqIglAf3-MYaFSWV2uYbUnleJjC1bYw/pub) | priority | intent to implement bind summary bind is a new amp extension that allows elements to mutate in response to user actions or data changes via data binding and simple js like expressions this will allow publishers to build responsive uis for use cases not covered by existing amp extended components see for possible use cases usage in this example the element will have its textcontent changed from to hello world bar “hello world” syntax is subject to change details syntax discussion design doc | 1 |
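The `[text]="foo.bar"` binding in the record above evaluates a simple expression against published data. A toy resolver for dotted paths — a stand-in sketch, not AMP's real expression engine — could look like this:

```python
def resolve(expression, scope):
    """Resolve a dotted path like 'foo.bar' against nested dicts.

    A toy stand-in for the 'simple JS-like expressions' the extension
    describes; AMP's actual parser supports far more than dotted paths.
    """
    value = scope
    for part in expression.split("."):
        value = value[part]
    return value
```

A binding like `[text]="foo.bar"` would then re-run `resolve` whenever the data changes and write the result into the element's `textContent`.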
527,095 | 15,308,465,857 | IssuesEvent | 2021-02-24 22:30:40 | returntocorp/semgrep | https://api.github.com/repos/returntocorp/semgrep | closed | Duplicates across rulesets should be eliminated | external-user priority:high | **Describe the bug**
I recently changed a policy from one ruleset (`r2c-ci`) to multiple rulesets. Upon adding a dummy test file to trigger a finding in CI, the finding was reported twice. I have not verified, but I suspect the rule is present in two rulesets and is therefore reported twice:
<img width="425" alt="image" src="https://user-images.githubusercontent.com/6817500/102645668-b021f200-4117-11eb-9f99-97b380f20780.png">
**To Reproduce**
Here are the rulesets I used:
<img width="173" alt="image" src="https://user-images.githubusercontent.com/6817500/102645936-1c9cf100-4118-11eb-9336-64c7c50fdaa9.png">
I then just added a test.py file to my repository with the contents `5 == 5`.
**Expected behavior**
Finding reported just once.
**Screenshots**
NA
**What is the priority of the bug to you?**
Is this a P0 (blocking your adoption of Semgrep or workflow), P1 (important to fix or quite annoying), P2 (regular bug that should get fixed)?
P2 -- regular
**Environment**
v 0.35.0
| 1.0 | Duplicates across rulesets should be eliminated - **Describe the bug**
I recently changed a policy from one ruleset (`r2c-ci`) to multiple rulesets. Upon adding a dummy test file to trigger a finding in CI, the finding was reported twice. I have not verified, but I suspect the rule is present in two rulesets and is therefore reported twice:
<img width="425" alt="image" src="https://user-images.githubusercontent.com/6817500/102645668-b021f200-4117-11eb-9f99-97b380f20780.png">
**To Reproduce**
Here are the rulesets I used:
<img width="173" alt="image" src="https://user-images.githubusercontent.com/6817500/102645936-1c9cf100-4118-11eb-9336-64c7c50fdaa9.png">
I then just added a test.py file to my repository with the contents `5 == 5`.
**Expected behavior**
Finding reported just once.
**Screenshots**
NA
**What is the priority of the bug to you?**
Is this a P0 (blocking your adoption of Semgrep or workflow), P1 (important to fix or quite annoying), P2 (regular bug that should get fixed)?
P2 -- regular
**Environment**
v 0.35.0
| priority | duplicates across rulesets should be eliminated describe the bug i recently changed a policy from one ruleset ci to multiple rulesets upon adding a dummy test file to trigger a finding in ci the finding was reported twice i have not verified but i suspect the rule is present in two rulesets so is therefore reporting twice img width alt image src to reproduce here are the rulesets i used img width alt image src i then just added a test py file to my repository with the contents expected behavior finding reported just once screenshots na what is the priority of the bug to you is this a blocking your adoption of semgrep or workflow important to fix or quite annoying regular bug that should get fixed regular environment v | 1 |
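One way to eliminate the double reporting described above is to key findings on ruleset-independent fields before presenting them. A minimal sketch, with hypothetical field names rather than Semgrep's actual schema:

```python
def dedupe_findings(findings):
    """Drop findings that differ only in which ruleset produced them.

    Keys on (rule id, file, position), so the same rule pulled in via
    two rulesets is reported once; field names are illustrative.
    """
    seen = set()
    unique = []
    for finding in findings:
        key = (finding["check_id"], finding["path"], finding["start_line"])
        if key not in seen:
            seen.add(key)
            unique.append(finding)
    return unique
```

The first occurrence wins, so the reported finding keeps whichever ruleset attribution was seen first.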
271,383 | 8,483,361,693 | IssuesEvent | 2018-10-25 21:19:43 | A2Z-hq/a2z-backend | https://api.github.com/repos/A2Z-hq/a2z-backend | opened | Create database model for different section (resources) | priority:high | - Take a reference from the model schema defined in the README.md file | 1.0 | Create database model for different section (resources) - - Take a reference from the model schema defined in the README.md file | priority | create database model for different section resources take a reference from the model schema defined in the readme md file | 1 |
769,904 | 27,021,930,013 | IssuesEvent | 2023-02-11 04:55:21 | zulip/zulip | https://api.github.com/repos/zulip/zulip | opened | Make it possible to add users to groups in invitation modal | area: onboarding area: settings (admin/org) priority: high area: popovers | In #19525, we are using groups to make Zulip's permissions model much more
flexible. As this project progresses, groups will become a key tool for managing
user permissions, stream subscriptions, etc. We should therefore make it
possible to immediately add users to groups when they are invited to join Zulip.
New input to be added to the **Invite users to Zulip** modal:
* Label: Groups they should join
* Placement: Above "Streams they should join"
* Input:
* Text box with typeahead for groups
* When selected, groups are displayed as pills, with the styling used in the
message feed
* Clicking on a pill should display group info, same as in the message
feed
* Each pill should have an `x` on the right for removal
Notes:
* This is only for adding users to custom groups; system groups (a.k.a. roles)
will still be managed separately.
Blockers:
- [ ] #23466 | 1.0 | Make it possible to add users to groups in invitation modal - In #19525, we are using groups to make Zulip's permissions model much more
flexible. As this project progresses, groups will become a key tool for managing
user permissions, stream subscriptions, etc. We should therefore make it
possible to immediately add users to groups when they are invited to join Zulip.
New input to be added to the **Invite users to Zulip** modal:
* Label: Groups they should join
* Placement: Above "Streams they should join"
* Input:
* Text box with typeahead for groups
* When selected, groups are displayed as pills, with the styling used in the
message feed
* Clicking on a pill should display group info, same as in the message
feed
* Each pill should have an `x` on the right for removal
Notes:
* This is only for adding users to custom groups; system groups (a.k.a. roles)
will still be managed separately.
Blockers:
- [ ] #23466 | priority | make it possible to add users to groups in invitation modal in we are using groups to make zulip s permissions model much more flexible as this project progresses groups will become a key tool for managing user permissions stream subscriptions etc we should therefore make it possible to immediately add users to groups when they are invited to join zulip new input to be added to the invite users to zulip modal label groups they should join placement above streams they should join input text box with typeahead for groups when selected groups are displayed as pills with the styling used in the message feed clicking on a pill should display group info same as in the message feed each pill should have an x on the right for removal notes this is only for adding users to custom groups system groups a k a roles will still be managed separately blockers | 1 |
447,701 | 12,892,008,228 | IssuesEvent | 2020-07-13 18:47:51 | coyiutoc/Honda-Capstone-Salmon | https://api.github.com/repos/coyiutoc/Honda-Capstone-Salmon | opened | Search by Headers | 7/13 moscow high priority | Add ability to search by cluster names.
- [ ] Prototype in Figma
- Considerations on whether this is a separate search bar in the main workspace, or have another type of "card" in the left-hand search column
- [ ] Implement in React | 1.0 | Search by Headers - Add ability to search by cluster names.
- [ ] Prototype in Figma
- Considerations on whether this is a separate search bar in the main workspace, or have another type of "card" in the left-hand search column
- [ ] Implement in React | priority | search by headers add ability to search by cluster names prototype in figma considerations on whether this is a separate search bar in the main workspace or have another type of card in the left hand search column implement in react | 1 |
224,425 | 7,470,070,401 | IssuesEvent | 2018-04-03 02:37:41 | CanberraOceanRacingClub/namadgi3 | https://api.github.com/repos/CanberraOceanRacingClub/namadgi3 | opened | MATCH CORC PLB NUMBERS TO AMSA PLB NUMBERS | Bavaria Working bee priority 1: High | NEEDS TO BE checked and updated for SOP21 Operations Manager.
The new 2016 PLBs are registered but we need to check and record which CORC number we have labelled them with. | 1.0 | MATCH CORC PLB NUMBERS TO AMSA PLB NUMBERS - NEEDS TO BE checked and updated for SOP21 Operations Manager.
The new 2016 PLBs are registered but we need to check and record which CORC number we have labelled them with. | priority | match corc plb numbers to amsa plb numbers needs to be checked and updated for operations manager the new plbs are registered but we need to check and record which corc number we have labelled them with | 1 |
462,855 | 13,254,993,517 | IssuesEvent | 2020-08-20 10:09:28 | OpenNebula/one | https://api.github.com/repos/OpenNebula/one | closed | Avoid gathering KVM I/O statistics for saving VMs | Category: Drivers - Monitor Category: KVM Priority: High Sponsored Status: Accepted Type: Bug | **Description**
Due to a bug (report TBD) in libvirt 5.0.0 on Debian 10, if a KVM domain is being saved or (system-)snapshotted, the queued domain statistics query operations (`domstats`), which talk directly to QEMU over the monitor channel, might crash libvirtd right after the save/snapshot is over. This naturally results in operation failure in OpenNebula. OpenNebula could avoid gathering statistics for saving VMs, as it
- doesn't make much sense (the data provided are reduced),
- slows down the monitoring operation (even by tens of seconds, as the operation is blocked by the running save),
- unnecessarily stresses the broken libvirtd.
Note: I **wasn't** able to reproduce the problem with
- libvirt 6.0.0 (Ubuntu 20.04),
- libvirt 6.5.0 (Debian Testing)
- libvirt 3.0 (Debian 9) and
- libvirt 4.5 (CentOS 7)
(so we can expect this is just a misbehavior of the 5.x series.)
## Goal
We need to ignore paused VMs if reason is `saving` (or all?):
```
virsh # domstate one-123 --reason
paused (saving)
```
at least to decrease the chance that I/O monitoring runs into snapshotting operations and breaks them.
Fix needs to be introduced in branches
### OpenNebula 5.12
https://github.com/OpenNebula/one/blob/627e92c80b240552bac01e020000ba6a0b3472d2/src/im_mad/remotes/lib/kvm.rb#L285
and?
https://github.com/OpenNebula/one/blob/627e92c80b240552bac01e020000ba6a0b3472d2/src/im_mad/remotes/lib/kvm.rb#L262-L263
### OpenNebula 5.10
https://github.com/OpenNebula/one/blob/f5bde2b712a7e1b683238161179b10e025dccbc9/src/vmm_mad/remotes/kvm/poll#L158-L160
Customer ticket ref. https://opennebula.zendesk.com/agent/tickets/19076
**To Reproduce**
- Have Debian 10 and libvirt 5.0.0 hypervisor
- Throttle disks (e.g., `TOTAL_IOPS_SEC="50"`)
- Generate some disk/memory content, (system) snapshot/save VM
- If lucky, the libvirtd crashes after several minutes
**Details**
- Affected Component: Monitoring
- Hypervisor: KVM, libvirt 5.0.0
- Version: 5.10.3+
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
| 1.0 | Avoid gathering KVM I/O statistics for saving VMs - **Description**
Due to a bug (report TBD) in libvirt 5.0.0 on Debian 10, if a KVM domain is being saved or (system-)snapshotted, the queued domain statistics query operations (`domstats`), which talk directly to QEMU over the monitor channel, might crash libvirtd right after the save/snapshot is over. This naturally results in operation failure in OpenNebula. OpenNebula could avoid gathering statistics for saving VMs, as it
- doesn't make much sense (the data provided are reduced),
- slows down the monitoring operation (even by tens of seconds, as the operation is blocked by the running save),
- unnecessarily stresses the broken libvirtd.
Note: I **wasn't** able to reproduce the problem with
- libvirt 6.0.0 (Ubuntu 20.04),
- libvirt 6.5.0 (Debian Testing)
- libvirt 3.0 (Debian 9) and
- libvirt 4.5 (CentOS 7)
(so we can expect this is just a misbehavior of the 5.x series.)
## Goal
We need to ignore paused VMs if reason is `saving` (or all?):
```
virsh # domstate one-123 --reason
paused (saving)
```
at least to decrease the chance that I/O monitoring runs into snapshotting operations and breaks them.
Fix needs to be introduced in branches
### OpenNebula 5.12
https://github.com/OpenNebula/one/blob/627e92c80b240552bac01e020000ba6a0b3472d2/src/im_mad/remotes/lib/kvm.rb#L285
and?
https://github.com/OpenNebula/one/blob/627e92c80b240552bac01e020000ba6a0b3472d2/src/im_mad/remotes/lib/kvm.rb#L262-L263
### OpenNebula 5.10
https://github.com/OpenNebula/one/blob/f5bde2b712a7e1b683238161179b10e025dccbc9/src/vmm_mad/remotes/kvm/poll#L158-L160
Customer ticket ref. https://opennebula.zendesk.com/agent/tickets/19076
**To Reproduce**
- Have Debian 10 and libvirt 5.0.0 hypervisor
- Throttle disks (e.g., `TOTAL_IOPS_SEC="50"`)
- Generate some disk/memory content, (system) snapshot/save VM
- If lucky, the libvirtd crashes after several minutes
**Details**
- Affected Component: Monitoring
- Hypervisor: KVM, libvirt 5.0.0
- Version: 5.10.3+
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
| priority | avoid gathering kvm i o statistics for saving vms description due to a bug report tbd in libvirt in debian if kvm domain is being saved or system snapshotted the queued domain statistics query operations domstats which directly talks to the qemu over monitor channel might crash the libvirt right after the save snapshot is over this naturally results in operation failure in the opennebula opennebula could avoid gathering statistics for saving vms as it doesn t make much sense provided data are reduced slow downs the monitoring operation even tens of seconds as operation is blocked by running save unnecessarily stresses the broken libvirtd note i wasn t able to reproduce the problem with libvirt ubuntu libvirt debian testing libvirt debian and libvirt centos so we can expect this is just misbehavior of x series goal we need to ignore paused vms if reason is saving or all virsh domstate one reason paused saving at least to decrease the chance i o monitoring hits into snapshotting operations and breaks it fix needs to be introduced in branches opennebula and opennebula customer ticket ref to reproduce have debian and libvirt hypervisor throttle disks e g total iops sec generate some disk memory content system snapshot save vm if lucky the libvirtd crashes after several minutes details affected component monitoring hypervisor kvm libvirt version progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches | 1 |
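The proposed filter — skip statistics gathering for domains that `virsh domstate <dom> --reason` reports as `paused (saving)` — can be sketched as below. The real OpenNebula probes are Ruby scripts; this is an illustrative Python version, and the state-string parsing is an assumption based on the `virsh` output quoted in the issue.

```python
def should_poll_io(domstate_output):
    """Decide whether to gather I/O stats for a domain, given the
    output of `virsh domstate <dom> --reason`, e.g. 'paused (saving)'.

    Skipping saving/snapshotting domains avoids blocking the monitor
    run on the in-flight save and stressing a fragile libvirtd.
    """
    state = domstate_output.strip()
    if state.startswith("paused") and "saving" in state:
        return False
    return True

def domains_to_poll(states):
    """Filter a {domain: domstate output} map down to pollable domains."""
    return [dom for dom, out in states.items() if should_poll_io(out)]
```

A monitor probe would build the `states` map once per run and only issue `domstats` queries for the domains that pass the filter.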
80,337 | 3,560,969,855 | IssuesEvent | 2016-01-23 13:19:13 | hpcugent/easybuild-framework | https://api.github.com/repos/hpcugent/easybuild-framework | closed | libmkl_solver libs are deprecated (and empty) | bug priority (3) high software-specific | The `libmkl_solver` libraries that are a part of Intel MKL have been deprecated for a long time, see http://software.intel.com/en-us/articles/mkl_solver_libraries_are_deprecated_libraries_since_version_10_2_Update_2.
Intel MKL v10.x still ships them for backwards compatibility, but in MKL v11.x most of the `libmkl_solver` libs are gone, thus causing issues when building certain software packages (e.g. QuantumESPRESSO with `ictce/5.x`).
```
-bash-3.2$ ls software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver*
software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver.a software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver_lp64.a
software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver_ilp64.a software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver_lp64_sequential.a
software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver_ilp64_sequential.a
-bash-3.2$ ls software/imkl/11.0.3.163/mkl/lib/intel64/libmkl_solver*
software/imkl/11.0.3.163/mkl/lib/intel64/libmkl_solver.a
```
This should be fixed in `easybuild/toolchains/linalg/intelmkl.py`. | 1.0 | libmkl_solver libs are deprecated (and empty) - The `libmkl_solver` libraries that are a part of Intel MKL have been deprecated for a long time, see http://software.intel.com/en-us/articles/mkl_solver_libraries_are_deprecated_libraries_since_version_10_2_Update_2.
Intel MKL v10.x still ships them for backwards compatibility, but in MKL v11.x most of the `libmkl_solver` libs are gone, thus causing issues when building certain software packages (e.g. QuantumESPRESSO with `ictce/5.x`).
```
-bash-3.2$ ls software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver*
software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver.a software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver_lp64.a
software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver_ilp64.a software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver_lp64_sequential.a
software/imkl/10.3.6.233/mkl/lib/intel64/libmkl_solver_ilp64_sequential.a
-bash-3.2$ ls software/imkl/11.0.3.163/mkl/lib/intel64/libmkl_solver*
software/imkl/11.0.3.163/mkl/lib/intel64/libmkl_solver.a
```
This should be fixed in `easybuild/toolchains/linalg/intelmkl.py`. | priority | libmkl solver libs are deprecated and empty the libmkl solver libraries that are a part of intel mkl have been deprecated for a long time see intel mkl x still ships them for backwards compatibility but in mkl x most of the libmkl solver libs are gone thus causing issues when building certain software packages e g quantumespresso with ictce x bash ls software imkl mkl lib libmkl solver software imkl mkl lib libmkl solver a software imkl mkl lib libmkl solver a software imkl mkl lib libmkl solver a software imkl mkl lib libmkl solver sequential a software imkl mkl lib libmkl solver sequential a bash ls software imkl mkl lib libmkl solver software imkl mkl lib libmkl solver a this should be fixed in easybuild toolchains linalg intelmkl py | 1 |
547,763 | 16,046,520,792 | IssuesEvent | 2021-04-22 14:12:49 | reichlab/forecast-repository | https://api.github.com/repos/reichlab/forecast-repository | closed | Change as_of forecast query semantics to handle partial forecast uploads | High Priority | Right now `as_of` operates at the forecast/timezero/issue_date level, not factoring in the unit/target level. This works OK with our current covidhub convention whereby forecast updates are done for **all** unit/target combinations. However, when we implement county-level truth forecasts, the files will be too large (~16MB) to upload with duplicates. Therefore we want to update/patch only "diffs". However, that will expose the weakness of the current `as_of` implementation. See the example at [Zoltar as_of query examples](https://docs.google.com/spreadsheets/d/1lT-WhgUG5vgonqjO_AvUDfXpNMC-alC7VHUzP4EJz7E/edit?ts=5fce8828#gid=0).
A prerequisite to: [Think about representing truth as an oracle model #280]. | 1.0 | Change as_of forecast query semantics to handle partial forecast uploads - Right now `as_of` operates at the forecast/timezero/issue_date level, not factoring in the unit/target level. This works OK with our current covidhub convention whereby forecast updates are done for **all** unit/target combinations. However, when we implement county-level truth forecasts, the files will be too large (~16MB) to upload with duplicates. Therefore we want to update/patch only "diffs". However, that will expose the weakness of the current `as_of` implementation. See the example at [Zoltar as_of query examples](https://docs.google.com/spreadsheets/d/1lT-WhgUG5vgonqjO_AvUDfXpNMC-alC7VHUzP4EJz7E/edit?ts=5fce8828#gid=0).
A prerequisite to: [Think about representing truth as an oracle model #280]. | priority | change as of forecast query semantics to handle partial forecast uploads right now as of operates at the forecast timezero issue date level not factoring in the unit target level this works ok with our current covidhub convention whereby forecast updates are done for all unit target combinations however when we implement county level truth forecasts the files will be too large to upload with duplicates therefore we want to update patch only diffs however that will expose the weakness of the current as of implementation see the example at a prerequisite to | 1 |
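With per-(unit, target) semantics, an `as_of` query would pick, for each unit/target pair, the newest row whose issue date does not exceed the cutoff, so partial "diff" uploads overlay older data instead of hiding it. A minimal sketch with hypothetical row fields, not Zoltar's actual schema:

```python
def as_of_view(rows, as_of):
    """For each (unit, target), return the value from the newest row
    with issue_date <= as_of. Dates are ISO strings, so plain string
    comparison orders them correctly; field names are illustrative."""
    latest = {}
    for row in rows:
        if row["issue_date"] > as_of:
            continue  # issued after the cutoff: invisible to this query
        key = (row["unit"], row["target"])
        if key not in latest or row["issue_date"] > latest[key]["issue_date"]:
            latest[key] = row
    return {key: row["value"] for key, row in latest.items()}
```

A later partial upload that touches only one (unit, target) pair then shadows just that pair, while all other pairs still resolve to the earlier upload.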
687,943 | 23,543,362,568 | IssuesEvent | 2022-08-20 18:54:33 | dnd-side-project/dnd-7th-4-backend | https://api.github.com/repos/dnd-side-project/dnd-7th-4-backend | closed | [Feature] Update the illustration to match the weather condition when searching a region via /search | Type: Improvement Priority-High | - [x] Upload the matching images to the s3 bucket under `bucket / icons / weather by time of day`
- [x] Determine the "weather" according to the sky-condition and precipitation criteria, then serve the matching illustration image
- [x] Switch the current temperature from api3 -> api1, improving the accuracy of the current "temperature"
<img width="294" alt="image" src="https://user-images.githubusercontent.com/71487608/185761237-22694fe4-2dec-46b9-9048-9c1a746fe7d8.png">
 | 1.0 | [Feature] Update the illustration to match the weather condition when searching a region via /search - - [x] Upload the matching images to the s3 bucket under `bucket / icons / weather by time of day`
- [x] Determine the "weather" according to the sky-condition and precipitation criteria, then serve the matching illustration image
- [x] Switch the current temperature from api3 -> api1, improving the accuracy of the current "temperature"
<img width="294" alt="image" src="https://user-images.githubusercontent.com/71487608/185761237-22694fe4-2dec-46b9-9048-9c1a746fe7d8.png">
 | priority | search update the illustration to match the weather condition when searching a region upload the matching images to the bucket under bucket icons weather by time of day determine the weather according to the sky condition and precipitation criteria then serve the matching illustration image switch the current temperature from to improving the accuracy of the current temperature img width alt image src | 1 |
162,349 | 6,151,452,838 | IssuesEvent | 2017-06-28 02:38:04 | mercadopago/px-ios | https://api.github.com/repos/mercadopago/px-ios | closed | Unify the cancel callbacks | Priority: High Type: Enhancement | Unify the cancel callback for the back action and for exiting the review-and-confirm screen
 | 1.0 | Unify the cancel callbacks - Unify the cancel callback for the back action and for exiting the review-and-confirm screen
 | priority | unify the cancel callbacks unify the cancel callback for the back action and for exiting the review and confirm screen | 1 |
814,773 | 30,521,570,204 | IssuesEvent | 2023-07-19 08:26:46 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | Support setting the default registry. | kind/feature priority/high kind/need-to-verify | <!--
You don't need to remove this comment section, it's invisible on the issues page.
## General remarks
* Attention, please fill out this issues form using English only!
* Note: GitHub Issues support English only; for Chinese issues, please post in the [forum](https://kubesphere.com.cn/forum/).
* This form is to report bugs. For general usage questions you can join our Slack channel
[KubeSphere-users](https://join.slack.com/t/kubesphere/shared_invite/enQtNTE3MDIxNzUxNzQ0LTZkNTdkYWNiYTVkMTM5ZThhODY1MjAyZmVlYWEwZmQ3ODQ1NmM1MGVkNWEzZTRhNzk0MzM5MmY4NDc3ZWVhMjE)
-->
https://github.com/kubesphere/issues/issues/95
**Describe the Bug**
A clear and concise description of what the bug is.
For UI issues please also add a screenshot that shows the issue.
**Versions Used**
KubeSphere:
Kubernetes: (If KubeSphere installer used, you can skip this)
**Environment**
How many nodes and their hardware configuration:
For example: CentOS 7.5 / 3 masters: 8cpu/8g; 3 nodes: 8cpu/16g
(and other info are welcomed to help us debugging)
**How To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
| 1.0 | Support setting the default registry. - <!--
You don't need to remove this comment section, it's invisible on the issues page.
## General remarks
* Attention, please fill out this issues form using English only!
* Note: GitHub Issues support English only; for Chinese issues, please post in the [forum](https://kubesphere.com.cn/forum/).
* This form is to report bugs. For general usage questions you can join our Slack channel
[KubeSphere-users](https://join.slack.com/t/kubesphere/shared_invite/enQtNTE3MDIxNzUxNzQ0LTZkNTdkYWNiYTVkMTM5ZThhODY1MjAyZmVlYWEwZmQ3ODQ1NmM1MGVkNWEzZTRhNzk0MzM5MmY4NDc3ZWVhMjE)
-->
https://github.com/kubesphere/issues/issues/95
**Describe the Bug**
A clear and concise description of what the bug is.
For UI issues please also add a screenshot that shows the issue.
**Versions Used**
KubeSphere:
Kubernetes: (If KubeSphere installer used, you can skip this)
**Environment**
How many nodes and their hardware configuration:
For example: CentOS 7.5 / 3 masters: 8cpu/8g; 3 nodes: 8cpu/16g
(and other info are welcomed to help us debugging)
**How To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
| priority | support setting the default registry you don t need to remove this comment section it s invisible on the issues page general remarks attention please fill out this issues form using english only 注意!github issue 仅支持英文,中文 issue 请在 提交。 this form is to report bugs for general usage questions you can join our slack channel describe the bug a clear and concise description of what the bug is for ui issues please also add a screenshot that shows the issue versions used kubesphere kubernetes if kubesphere installer used you can skip this environment how many nodes and their hardware configuration for example centos masters nodes and other info are welcomed to help us debugging how to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior a clear and concise description of what you expected to happen | 1 |
703,214 | 24,150,311,695 | IssuesEvent | 2022-09-21 23:29:38 | HackerN64/HackerSM64 | https://api.github.com/repos/HackerN64/HackerSM64 | closed | Exit Course Sound Effect Does Not Play | bug high priority | The sound effect that plays when exiting course and respawning in the castle simply does not activate. It doesn't play when voiding out of the vanish cap stage either. | 1.0 | Exit Course Sound Effect Does Not Play - The sound effect that plays when exiting course and respawning in the castle simply does not activate. It doesn't play when voiding out of the vanish cap stage either. | priority | exit course sound effect does not play the sound effect that plays when exiting course and respawning in the castle simply does not activate it doesn t play when voiding out of the vanish cap stage either | 1 |
105,536 | 4,237,524,478 | IssuesEvent | 2016-07-05 22:09:40 | vickychijwani/quill | https://api.github.com/repos/vickychijwani/quill | closed | Publish action not enabled after editing post settings (tags, featured post, ...) | bug high-priority | 1. Open a published post
2. Edit its tags / "featured" setting
3. The "Publish" action should now be enabled as soon as the drawer is closed, but it is not | 1.0 | Publish action not enabled after editing post settings (tags, featured post, ...) - 1. Open a published post
2. Edit its tags / "featured" setting
3. The "Publish" action should now be enabled as soon as the drawer is closed, but it is not | priority | publish action not enabled after editing post settings tags featured post open a published post edit its tags featured setting the publish action should now be enabled as soon as the drawer is closed but it is not | 1 |
78,706 | 3,516,496,589 | IssuesEvent | 2016-01-12 00:03:18 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Snow: "More info" links not clickable when snowing, source needs to be updated | Bug Priority: High Relevancy | Steps to reproduce:
1. Go to https://duckduckgo.com/?q=is+it+snowing+in+ushuaia&ia=answer (or somewhere else where it's likely to be snowing now)
2. Try to click on the information icon or "More at ..." link.
What should happen:
The links should be clickable
What actually happens:
The animated snow layer is on top of the content box, making it impossible to interact with it.
------
IA Page: http://duck.co/ia/view/snow | 1.0 | Snow: "More info" links not clickable when snowing, source needs to be updated - Steps to reproduce:
1. Go to https://duckduckgo.com/?q=is+it+snowing+in+ushuaia&ia=answer (or somewhere else where it's likely to be snowing now)
2. Try to click on the information icon or "More at ..." link.
What should happen:
The links should be clickable
What actually happens:
The animated snow layer is on top of the content box, making it impossible to interact with it.
------
IA Page: http://duck.co/ia/view/snow | priority | snow more info links not clickable when snowing source needs to be updated steps to reproduce go to or somewhere else where it s likely to be snowing now try to click on the information icon or more at link what should happen the links should be clickable what actually happens the animated snow layer is on top of the content box making it not possible to interact with it ia page | 1 |
809,932 | 30,217,726,198 | IssuesEvent | 2023-07-05 16:47:48 | calcom/cal.com | https://api.github.com/repos/calcom/cal.com | closed | [CAL-1089] Availability is ignored and can schedule event extending over available time | 🐛 bug 👩🔬 needs investigation High priority bookings | ### Issue Summary
Notice how my availability ends at 20:00, but a 30min event can still be scheduled at 19:45.
<img width="367" alt="image" src="https://user-images.githubusercontent.com/15384781/219028103-4f30e2a7-5e9d-4b94-aeea-c27d05bbd8fe.png">
<img width="724" alt="image" src="https://user-images.githubusercontent.com/15384781/219028498-6fd072c5-0d8f-4d59-941d-95256e6b666b.png">
### Steps to Reproduce
1. Set availability to 17:45 - 20:00
2. Create new event 30min event
3. Event can be booked at 19:45
<sub>[CAL-1089](https://linear.app/calcom/issue/CAL-1089/availability-is-ignored-and-can-schedule-event-extending-over)</sub> | 1.0 | [CAL-1089] Availability is ignored and can schedule event extending over available time - ### Issue Summary
Notice how my availability ends at 20:00, but a 30min event can still be scheduled at 19:45.
<img width="367" alt="image" src="https://user-images.githubusercontent.com/15384781/219028103-4f30e2a7-5e9d-4b94-aeea-c27d05bbd8fe.png">
<img width="724" alt="image" src="https://user-images.githubusercontent.com/15384781/219028498-6fd072c5-0d8f-4d59-941d-95256e6b666b.png">
### Steps to Reproduce
1. Set availability to 17:45 - 20:00
2. Create new event 30min event
3. Event can be booked at 19:45
<sub>[CAL-1089](https://linear.app/calcom/issue/CAL-1089/availability-is-ignored-and-can-schedule-event-extending-over)</sub> | priority | availability is ignored and can schedule event extending over available time issue summary notice how my availability ends at but a event can still be scheduled at img width alt image src img width alt image src steps to reproduce set availability to create new event event event can be booked at | 1 |
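The bug above boils down to a missing boundary check: a slot must be rejected when its end time, not just its start time, falls outside the availability window. A minimal sketch of that check (function and variable names are illustrative assumptions, not Cal.com's actual code):

```python
from datetime import datetime, timedelta

def slot_fits(slot_start: datetime, duration_min: int,
              avail_start: datetime, avail_end: datetime) -> bool:
    # A slot is bookable only if it starts AND ends inside the window.
    slot_end = slot_start + timedelta(minutes=duration_min)
    return avail_start <= slot_start and slot_end <= avail_end

avail_start = datetime(2023, 2, 15, 17, 45)
avail_end = datetime(2023, 2, 15, 20, 0)

# 19:45 + 30min would end at 20:15, past the 20:00 boundary -> rejected
print(slot_fits(datetime(2023, 2, 15, 19, 45), 30, avail_start, avail_end))  # False
# 19:30 + 30min ends exactly at 20:00 -> allowed
print(slot_fits(datetime(2023, 2, 15, 19, 30), 30, avail_start, avail_end))  # True
```

With this check in the slot generator, the 19:45 slot from the report would be filtered out of the 17:45–20:00 window.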
696,436 | 23,901,081,498 | IssuesEvent | 2022-09-08 18:50:43 | CarnegieLearningWeb/UpGrade | https://api.github.com/repos/CarnegieLearningWeb/UpGrade | reopened | Unify /api/mark and /api/failed | bug dependencies priority: high | Right now Mathia will always make some callback to UpGrade when it hits the SelectSection experiment point: either `/api/mark` if (a) no experiment was running or (b) an experiment was running and Mathia was able to carry out the condition that UpGrade asked it to; or `/api/failed` if (c) an experiment was running but Mathia was not able to carry out the condition that UpGrade asked it to.
This is at least bad ergonomics for developers integrating with UpGrade: it’s not clear if you should call `/api/mark` XOR `/api/failed`, vs. `/api/mark` AND MAYBE ALSO `/api/failed`. It also seems to be the case that UpGrade isn’t interpreting a lone call to `/api/failed` as a student encountering the experiment point, since such students didn’t show up on the experiment dashboard after the testing done on Dec. 16.
So, we think we ought to instead do away with `/api/failed` as a separate API call, and add to `/api/mark` a way to pass the status of the user’s encounter with the experiment point. That status is either:
1. no experiment running
1. experiment running, experiment condition successfully applied
1. experiment running, experiment condition could not be applied
This both makes it clear to devs integrating with UpGrade that `/api/mark` should always be called; and makes sure that UpGrade has clear information about the state of each user WRT each experiment point.
cc @SritterCL @amurphy-cl | 1.0 | Unify /api/mark and /api/failed - Right now Mathia will always make some callback to UpGrade when it hits the SelectSection experiment point: either `/api/mark` if (a) no experiment was running or (b) an experiment was running and Mathia was able to carry out the condition that UpGrade asked it to; or `/api/failed` if (c) an experiment was running but Mathia was not able to carry out the condition that UpGrade asked it to.
This is at least bad ergonomics for developers integrating with UpGrade: it’s not clear if you should call `/api/mark` XOR `/api/failed`, vs. `/api/mark` AND MAYBE ALSO `/api/failed`. It also seems to be the case that UpGrade isn’t interpreting a lone call to `/api/failed` as a student encountering the experiment point, since such students didn’t show up on the experiment dashboard after the testing done on Dec. 16.
So, we think we ought to instead do away with `/api/failed` as a separate API call, and add to `/api/mark` a way to pass the status of the user’s encounter with the experiment point. That status is either:
1. no experiment running
1. experiment running, experiment condition successfully applied
1. experiment running, experiment condition could not be applied
This both makes it clear to devs integrating with UpGrade that `/api/mark` should always be called; and makes sure that UpGrade has clear information about the state of each user WRT each experiment point.
cc @SritterCL @amurphy-cl | priority | unify api mark and api failed right now mathia will always make some callback to upgrade when it hits the selectsection experiment point either api mark if the a no experiment was running or b an experiment was running and mathia was able to carry out the condition that upgrade asked it to or api failed if c an experiment was running but mathia was not able to carry out the condition that upgrade asked it to this is at least bad ergonomics for developers integrating with upgrade it’s not clear if you should call api mark xor api failed vs api mark and maybe also api failed it also seems to be the case that upgrade isn’t interpreting a lone call to api failed as a student encountering the experiment point since such students didn’t show up on the experiment dashboard after the testing done on dec so we think we ought to instead do away with api failed as a separate api call and add to api mark a way to pass the status of the user’s encounter with the experiment point that status is either no experiment running experiment running experiment condition successfully applied experiment running experiment condition could not be applied this both makes it clear to devs integrating with upgrade that api mark should always be called and makes sure that upgrade has clear information about the state of each user wrt each experiment point cc srittercl amurphy cl | 1 |
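The unified call proposed above could carry the user's encounter status as an explicit field on `/api/mark`. A rough sketch of the payload shape (the field names and the `MarkStatus` enum are assumptions for illustration, not UpGrade's actual API):

```python
from enum import Enum
from typing import Optional

class MarkStatus(Enum):
    # The three outcomes enumerated in the issue above.
    NO_EXPERIMENT = "no experiment running"
    CONDITION_APPLIED = "condition successfully applied"
    CONDITION_FAILED = "condition could not be applied"

def build_mark_payload(user_id: str, experiment_point: str,
                       status: MarkStatus,
                       condition: Optional[str] = None) -> dict:
    """One /api/mark call that always reports the encounter outcome,
    replacing the separate /api/failed endpoint."""
    payload = {
        "userId": user_id,
        "experimentPoint": experiment_point,  # e.g. "SelectSection"
        "status": status.value,
    }
    if condition is not None:
        payload["condition"] = condition
    return payload

p = build_mark_payload("student-42", "SelectSection", MarkStatus.CONDITION_FAILED)
print(p["status"])  # condition could not be applied
```

Because the status field is always present, integrators call one endpoint unconditionally, and the server can record every encounter with the experiment point.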
749,742 | 26,177,665,367 | IssuesEvent | 2023-01-02 11:50:35 | bounswe/bounswe2022group4 | https://api.github.com/repos/bounswe/bounswe2022group4 | closed | PRESENTATION: Create Scenarios For Final Customer Presentation | Category - To Do Priority - High Status: Completed | ### Description:
We will meet with @MercerFrey in BM4 and we will create realistic scenarios for final customer presentation.
* We will decide to which functionalities we will present and how to present them.
* We will prepare a demo and document it to use during presentation.
* We will also test projector and computers that will be used during presentation to encounter a problem like first Milestone.
* We have started to create post and comments to fill our system as much as possible. We will continue to use our functionalities with a lot of users. (Customer wanted to create different realistic post and comments)
### ⏰ Deadline:
* 27.12.2022 13:00
| 1.0 | PRESENTATION: Create Scenarios For Final Customer Presentation - ### Description:
We will meet with @MercerFrey in BM4 and we will create realistic scenarios for final customer presentation.
* We will decide which functionalities we will present and how to present them.
* We will prepare a demo and document it to use during presentation.
* We will also test the projector and computers that will be used during the presentation, so that we do not encounter a problem like in the first Milestone.
* We have started to create post and comments to fill our system as much as possible. We will continue to use our functionalities with a lot of users. (Customer wanted to create different realistic post and comments)
### ⏰ Deadline:
* 27.12.2022 13:00
| priority | presentation create scenarios for final customer presentation description we will meet with mercerfrey in and we will create realistic scenarios for final customer presentation we will decide to which functionalities we will present and how to present them we will prepare a demo and document it to use during presentation we will also test projector and computers that will be used during presentation to encounter a problem like first milestone we have started to create post and comments to fill our system as much as possible we will continue to use our functionalities with a lot of users customer wanted to create different realistic post and comments ⏰ deadline | 1 |
425,171 | 12,336,644,056 | IssuesEvent | 2020-05-14 13:53:25 | codebar/planner | https://api.github.com/repos/codebar/planner | closed | Any event invitations should not be emailed to banned members | high-priority | ## Description of the issue 📄
Even though banned members are unable to RSVP to the monthlies, they still receive invitations.
The process that sends out monthly invitations should be updated to not include banned members.
## Screenshots 📷
## Steps to fix 🛠
Have a look at `InvitationManager` as this is where all event invitations are triggered from
`app/models/invitation_manager.rb`
## To do 📋
* [ ] Claim this issue (comment below, or assign yourself if you are part of the codebar org)
* [ ] Fork and clone the repository
* [ ] Update the relevant files. Follow the steps to fix section in this issue.
* [ ] Commit your changes as one commit. Use the title of this issue as your commit message
* [ ] Submit a pull request
* [ ] Mention this issue in the PR description by including it's number
* [ ] Have your pull request reviewed & merged by a codebar team member
| 1.0 | Any event invitations should not be emailed to banned members - ## Description of the issue 📄
Even though banned members are unable to RSVP to the monthlies, they still receive invitations.
The process that sends out monthly invitations should be updated to not include banned members.
## Screenshots 📷
## Steps to fix 🛠
Have a look at `InvitationManager` as this is where all event invitations are triggered from
`app/models/invitation_manager.rb`
## To do 📋
* [ ] Claim this issue (comment below, or assign yourself if you are part of the codebar org)
* [ ] Fork and clone the repository
* [ ] Update the relevant files. Follow the steps to fix section in this issue.
* [ ] Commit your changes as one commit. Use the title of this issue as your commit message
* [ ] Submit a pull request
* [ ] Mention this issue in the PR description by including it's number
* [ ] Have your pull request reviewed & merged by a codebar team member
| priority | any event invitations should not be emailed to banned members description of the issue 📄 even though banned members are unable to rsvp to the monthlies they still receive invitations the process sending out monthlies invitations should be updated to not include banned members screenshots 📷 steps to fix 🛠 have a look at invitationmanager as thsi is where all event invitations are trigerred from app models invitation manager rb to do 📋 claim this issue comment below or assign yourself if you are part of the codebar org fork and clone the repository update the relevant files follow the steps to fix section in this issue commit your changes as one commit use the title of this issue as your commit message submit a pull request mention this issue in the pr description by including it s number have your pull request reviewed merged by a codebar team member | 1 |
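The fix described under "Steps to fix" amounts to filtering banned members out of the recipient list before any invitations are sent. A minimal sketch of that filter (the real code lives in `app/models/invitation_manager.rb`; the names and data shape here are illustrative):

```python
def invitable_members(members):
    """Return only members who should receive event invitations:
    banned members are silently skipped."""
    return [m for m in members if not m.get("banned", False)]

members = [
    {"email": "a@example.org", "banned": False},
    {"email": "b@example.org", "banned": True},
    {"email": "c@example.org"},  # no flag -> treated as not banned
]
print([m["email"] for m in invitable_members(members)])
# ['a@example.org', 'c@example.org']
```

Applying the filter once, where `InvitationManager` assembles its recipient list, keeps every event type consistent with the ban.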
797,897 | 28,208,227,457 | IssuesEvent | 2023-04-05 00:13:50 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | opened | [DocDB] All RPC threads stuck in YBMetaDataCache::GetTable | area/docdb priority/high area/ycql status/awaiting-triage | ### Description
In one of our 2.12 customers, we observed all RPC threads being stuck in this stack during a rather bursty workload:
```
yb::tserver::TabletServiceImpl::Write();
yb::tablet::TabletPeer::WriteAsync();
yb::tablet::Tablet::AcquireLocksAndPerformDocOperations();
yb::tablet::WriteQuery::Execute();
yb::tablet::WriteQuery::DoExecute();
yb::tablet::WriteQuery::CompleteExecute();
yb::tablet::WriteQuery::ExecuteDone();
yb::tablet::WriteQuery::CqlExecuteDone();
yb::tablet::WriteQuery::UpdateQLIndexes();
yb::client::YBMetaDataCache::GetTable();
yb::client::YBClient::OpenTable();
yb::client::YBClient::Data::GetTableSchema();
yb::Synchronizer::WaitUntil();
std::condition_variable::wait();
__pthread_cond_wait;
(unknown)
```
Based on code inspection, this points to the following code
```
// TODO create async version of GetTable.
// It is ok to have sync call here, because we use cache and it should not take too long.
auto status = metadata_cache->GetTable(
index_info->table_id(), &index_table, &cache_used_ignored);
```
Digging further shows that, in theory, we do have a per-table cache, but in practice, the very first time we try to fill that cache, we do not have a barrier to ensure that one and only one thread goes out to fetch the metadata; instead, all threads can end up in the `client_->OpenTable` call below:
```
Status YBMetaDataCache::GetTable(const TableId& table_id,
std::shared_ptr<YBTable>* table,
bool* cache_used) {
{
std::lock_guard<std::mutex> lock(cached_tables_mutex_);
auto itr = cached_tables_by_id_.find(table_id);
if (itr != cached_tables_by_id_.end()) {
*table = itr->second;
*cache_used = true;
return Status::OK();
}
}
RETURN_NOT_OK(client_->OpenTable(table_id, table));
{
std::lock_guard<std::mutex> lock(cached_tables_mutex_);
cached_tables_by_name_[(*table)->name()] = *table;
cached_tables_by_id_[table_id] = *table;
}
*cache_used = false;
return Status::OK();
}
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information. | 1.0 | [DocDB] All RPC threads stuck in YBMetaDataCache::GetTable - ### Description
In one of our 2.12 customers, we observed all RPC threads being stuck in this stack during a rather bursty workload:
```
yb::tserver::TabletServiceImpl::Write();
yb::tablet::TabletPeer::WriteAsync();
yb::tablet::Tablet::AcquireLocksAndPerformDocOperations();
yb::tablet::WriteQuery::Execute();
yb::tablet::WriteQuery::DoExecute();
yb::tablet::WriteQuery::CompleteExecute();
yb::tablet::WriteQuery::ExecuteDone();
yb::tablet::WriteQuery::CqlExecuteDone();
yb::tablet::WriteQuery::UpdateQLIndexes();
yb::client::YBMetaDataCache::GetTable();
yb::client::YBClient::OpenTable();
yb::client::YBClient::Data::GetTableSchema();
yb::Synchronizer::WaitUntil();
std::condition_variable::wait();
__pthread_cond_wait;
(unknown)
```
Based on code inspection, this points to the following code
```
// TODO create async version of GetTable.
// It is ok to have sync call here, because we use cache and it should not take too long.
auto status = metadata_cache->GetTable(
index_info->table_id(), &index_table, &cache_used_ignored);
```
Digging further shows that, in theory, we do have a per-table cache, but in practice, the very first time we try to fill that cache, we do not have a barrier to ensure that one and only one thread goes out to fetch the metadata; instead, all threads can end up in the `client_->OpenTable` call below:
```
Status YBMetaDataCache::GetTable(const TableId& table_id,
std::shared_ptr<YBTable>* table,
bool* cache_used) {
{
std::lock_guard<std::mutex> lock(cached_tables_mutex_);
auto itr = cached_tables_by_id_.find(table_id);
if (itr != cached_tables_by_id_.end()) {
*table = itr->second;
*cache_used = true;
return Status::OK();
}
}
RETURN_NOT_OK(client_->OpenTable(table_id, table));
{
std::lock_guard<std::mutex> lock(cached_tables_mutex_);
cached_tables_by_name_[(*table)->name()] = *table;
cached_tables_by_id_[table_id] = *table;
}
*cache_used = false;
return Status::OK();
}
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information. | priority | all rpc threads stuck in ybmetadatacache gettable description in one of our customers observed all rpc threads being stuck in this stack during a rather bursty workload yb tserver tabletserviceimpl write yb tablet tabletpeer writeasync yb tablet tablet acquirelocksandperformdocoperations yb tablet writequery execute yb tablet writequery doexecute yb tablet writequery completeexecute yb tablet writequery executedone yb tablet writequery cqlexecutedone yb tablet writequery updateqlindexes yb client ybmetadatacache gettable yb client ybclient opentable yb client ybclient data gettableschema yb synchronizer waituntil std condition variable wait pthread cond wait unknown based on code inspection this points to the following code todo create async version of gettable it is ok to have sync call here because we use cache and it should not take too long auto status metadata cache gettable index info table id index table cache used ignored which digging further shows that in theory we do have a per table cache but in practice the very first time we try to fill that cache we do not have a barrier to ensure and only thread goes out to fetch the metadata but instead all threads could end up in the client opentable call below status ybmetadatacache gettable const tableid table id std shared ptr table bool cache used std lock guard lock cached tables mutex auto itr cached tables by id find table id if itr cached tables by id end table itr second cache used true return status ok return not ok client opentable table id table std lock guard lock cached tables mutex cached tables by name table cached tables by id table cache used false return status ok warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
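The missing barrier described in this report is commonly closed with a "single-flight" cache: the first thread to miss installs a placeholder under the lock and performs the slow fetch outside it, while concurrent callers for the same key wait on that placeholder instead of issuing their own `OpenTable` calls. A minimal sketch of the idea (Python for brevity; the actual fix would live in the C++ client, and error handling for a failed load is omitted):

```python
import threading

class SingleFlightCache:
    """Cache where at most one loader call per key is in flight."""
    def __init__(self, loader):
        self._loader = loader
        self._lock = threading.Lock()
        self._done = {}      # key -> loaded value
        self._inflight = {}  # key -> Event set when the load completes

    def get(self, key):
        while True:
            with self._lock:
                if key in self._done:
                    return self._done[key]       # fast path: cache hit
                ev = self._inflight.get(key)
                if ev is None:
                    # We are the loading thread: install a placeholder.
                    ev = threading.Event()
                    self._inflight[key] = ev
                    loading = True
                else:
                    loading = False
            if loading:
                value = self._loader(key)        # slow RPC outside the lock
                with self._lock:
                    self._done[key] = value
                    del self._inflight[key]
                ev.set()
                return value
            ev.wait()  # another thread is loading; wait, then re-check

calls = []
def slow_load(key):
    calls.append(key)  # count real "OpenTable"-style RPCs
    return f"schema-of-{key}"

cache = SingleFlightCache(slow_load)
threads = [threading.Thread(target=cache.get, args=("tbl",)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(calls))  # 1 -- the loader ran once despite 8 concurrent readers
```

The lock guarantees exactly one thread observes both "not cached" and "not in flight" for a given key, so the thundering herd on the very first fill collapses to a single fetch.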
287,637 | 8,817,786,426 | IssuesEvent | 2018-12-31 05:19:28 | Veil-Project/veil | https://api.github.com/repos/Veil-Project/veil | closed | Transaction Detail data is incorrect | Transaction Records bug high priority wallet | In macOS wallet (released Veil wallet.app version) on testnet, the Transaction Detail view has incorrect data.
If clicking on a Zerocoin Mint item, the details seem okay, except for the date, which is always 25/10/18. (I reported this early on, but didn't put it directly into Github.)
If clicking on a Basecoin Mined item, in addition to the date problem, the Fee amount shows in the order of positive or negative _billions_ of Veil or thereabouts!


| 1.0 | Transaction Detail data is incorrect - In macOS wallet (released Veil wallet.app version) on testnet, the Transaction Detail view has incorrect data.
If clicking on a Zerocoin Mint item, the details seem okay, except for the date, which is always 25/10/18. (I reported this early on, but didn't put it directly into Github.)
If clicking on a Basecoin Mined item, in addition to the date problem, the Fee amount shows in the order of positive or negative _billions_ of Veil or thereabouts!


| priority | transaction detail data is incorrect in macos wallet released veil wallet app version on testnet the transaction detail view has incorrect data if clicking on a zerocoin mint item the details seem okay except for the date which is always i reported this early on but didn t put it directly into github if clicking on a basecoin mined item in addition to the date problem the fee amount shows in the order of positive or negative billions of veil or thereabouts | 1 |
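A fee rendered as positive or negative billions of coins is often a symptom of displaying the raw base-unit integer without converting to whole coins. Purely as a hypothesis — the base-unit constant and whether this is Veil's actual bug are assumptions — the conversion would look like:

```python
COIN = 100_000_000  # base units per coin (assumed; the real constant may differ)

def format_amount(base_units: int) -> str:
    """Render a signed base-unit integer as a human-readable coin amount."""
    sign = "-" if base_units < 0 else ""
    units = abs(base_units)
    return f"{sign}{units // COIN}.{units % COIN:08d}"

print(format_amount(25_000_000))      # 0.25000000
print(format_amount(-3_141_592_653))  # -31.41592653
```

If the detail view skipped this division, a modest fee in base units would indeed read as "billions of Veil".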
777,688 | 27,290,717,457 | IssuesEvent | 2023-02-23 16:25:03 | GSM-MSG/GCMS-FrontEnd-V2 | https://api.github.com/repos/GSM-MSG/GCMS-FrontEnd-V2 | closed | club member page permission issue | 1️⃣ Priority: High 🐞 Bug | ### Describe
On the club member page, a user with the club-leader (부장) role can delegate leadership to or expel themselves.(...?)
### Additional
_No response_ | 1.0 | club member page permission issue - ### Describe
On the club member page, a user with the club-leader (부장) role can delegate leadership to or expel themselves.(...?)
### Additional
_No response_ | priority | club member page permission issue describe on the club member page a user with the club leader role can delegate leadership to or expel themselves additional no response | 1 |
359,290 | 10,673,815,719 | IssuesEvent | 2019-10-21 08:07:06 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [master-preview] Ashlar Conclave Blocks display | High Priority | Fence:

SlopePoint:

Brace:

Ladder:

| 1.0 | [master-preview] Ashlar Conclave Blocks display - Fence:

SlopePoint:

Brace:

Ladder:

| priority | ashlar conclave blocks display fence slopepoint brace ladder | 1 |
359,073 | 10,659,700,546 | IssuesEvent | 2019-10-18 08:19:51 | AY1920S1-CS2103T-T10-2/main | https://api.github.com/repos/AY1920S1-CS2103T-T10-2/main | reopened | As a Health Conscious person, I want to create custom workouts from the exercise I have created | priority.High status.Ongoing type.Story | So that I can manage and plan my workouts better | 1.0 | As a Health Conscious person, I want to create custom workouts from the exercise I have created - So that I can manage and plan my workouts better | priority | as a health conscious person i want to create custom workouts from the exercise i have created so that i can manage and plan my workouts better | 1 |
711,608 | 24,469,490,353 | IssuesEvent | 2022-10-07 18:17:07 | NCAR/wrfcloud | https://api.github.com/repos/NCAR/wrfcloud | closed | Add UPP for post-processing | priority: high type: new feature component: NWP components | ## Describe the New Feature ##
Some of the desired forecast products will require a post-processing package since they cannot be output by WRF. UPP provides these diagnostics and will also make the incorporation of MET verification easy, since it works well with UPP's grib2 output.
### Acceptance Testing ###
UPP will read in WRF netcdf output files.
Tests required will include:
- Confirming UPP image is built with Imagebuilder
- Testing a new run_upp.py script to run UPP on the WRF output files
### Time Estimate ###
1 week total
### Sub-Issues ###
Consider breaking the new feature down into sub-issues.
- [x] #50 (need assistance from engineer)
- [x] #51 (scientist)
### Relevant Deadlines ###
September 30, 2022
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [x] Select **scientist(s)** or **no scientist** required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
### Projects and Milestone ###
- [x] Select **Project**
- [x] Select **Milestone** as the next official version or **Backlog of Development Ideas**
## New Feature Checklist ##
- [x] Complete the issue definition above, including the **Time Estimate** and **Funding source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>/<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)**, **Project**, and **Development** issue
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
| 1.0 | Add UPP for post-processing - ## Describe the New Feature ##
Some of the desired forecast products will require a post-processing package since they cannot be output by WRF. UPP provides these diagnostics and will also make the incorporation of MET verification easy, since it works well with UPP's grib2 output.
### Acceptance Testing ###
UPP will read in WRF netcdf output files.
Tests required will include:
- Confirming UPP image is built with Imagebuilder
- Testing a new run_upp.py script to run UPP on the WRF output files
### Time Estimate ###
1 week total
### Sub-Issues ###
Consider breaking the new feature down into sub-issues.
- [x] #50 (need assistance from engineer)
- [x] #51 (scientist)
### Relevant Deadlines ###
September 30, 2022
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [x] Select **scientist(s)** or **no scientist** required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
### Projects and Milestone ###
- [x] Select **Project**
- [x] Select **Milestone** as the next official version or **Backlog of Development Ideas**
## New Feature Checklist ##
- [x] Complete the issue definition above, including the **Time Estimate** and **Funding source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>/<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)**, **Project**, and **Development** issue
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
| priority | add upp for post processing describe the new feature some of the desired forecast products will require a post processing package since they cannot be output by wrf upp provides these diagnostics and will also make the incorporation of met verification easy since it works well with upp s output acceptance testing upp will read in wrf netcdf output files tests required will include confirming upp image is built with imagebuilder testing a new run upp py script to run upp on the wrf output files time estimate week total sub issues consider breaking the new feature down into sub issues need assistance from engineer scientist relevant deadlines september define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority projects and milestone select project select milestone as the next official version or backlog of development ideas new feature checklist complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s project and development issue select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue | 1 |
137,834 | 5,317,062,688 | IssuesEvent | 2017-02-13 21:32:09 | Zchanson1/ImplantCompare | https://api.github.com/repos/Zchanson1/ImplantCompare | closed | Make UX Seamless across all apps | High Priority | Make the iOS app match our web app for seamless user experience. | 1.0 | Make UX Seamless across all apps - Make the iOS app match our web app for seamless user experience. | priority | make ux seamless across all apps make the ios app match our web app for seamless user experience | 1 |
191,779 | 6,842,642,634 | IssuesEvent | 2017-11-12 04:46:47 | CS2103AUG2017-W09-B1/main | https://api.github.com/repos/CS2103AUG2017-W09-B1/main | closed | FindCommand : The app doesn't response | priority.high type.bug | When I try to find the persons who have 'hahah' in the name and 'sci' in the tag, the app does not give any response!

| 1.0 | FindCommand : The app doesn't response - When I try to find the persons who have 'hahah' in the name and 'sci' in the tag, the app does not give any response!

| priority | findcommand the app doesn t response when i try to find the persons who have hahah in the name and sci in the tag the app does not give any response | 1 |
813,966 | 30,481,134,365 | IssuesEvent | 2023-07-17 20:27:28 | bcgov/bc-wallet-mobile | https://api.github.com/repos/bcgov/bc-wallet-mobile | closed | Revocation Notification Layout | priority/high | - notification design should follow the error notification colour palette (red)
- [link to styleguide](https://xd.adobe.com/view/eb9e0945-61d7-40ca-8e8e-bf4bd0c72934-becd/) | 1.0 | Revocation Notification Layout - - notification design should follow the error notification colour palette (red)
- [link to styleguide](https://xd.adobe.com/view/eb9e0945-61d7-40ca-8e8e-bf4bd0c72934-becd/) | priority | revocation notification layout notification design should follow the error notification colour palette red | 1 |
751,934 | 26,266,836,444 | IssuesEvent | 2023-01-06 13:21:43 | GenomicMedLab/cool-seq-tool | https://api.github.com/repos/GenomicMedLab/cool-seq-tool | closed | Investigate CPU credit usage on aws rds instance | bug priority:high performance | On Friday, our cloudwatch alarm went off:
> CPUCreditUsage > 2.5 for 1 datapoints within 1 minute
This was caused by hitting the `/gnomad_vcf_to_protein` endpoint in the variation-normalizer with SNVs. We should investigate this since it causes a timeout in variation-normalizer | 1.0 | Investigate CPU credit usage on aws rds instance - On Friday, our cloudwatch alarm went off:
> CPUCreditUsage > 2.5 for 1 datapoints within 1 minute
This was caused by hitting the `/gnomad_vcf_to_protein` endpoint in the variation-normalizer with SNVs. We should investigate this since it causes a timeout in variation-normalizer | priority | investigate cpu credit usage on aws rds instance on friday our cloudwatch alarm went off cpucreditusage for datapoints within minute this was caused by hitting the gnomad vcf to protein endpoint in the variation normalizer with snvs we should investigate this since it causes a timeout in variation normalizer | 1 |
379,011 | 11,211,984,542 | IssuesEvent | 2020-01-06 16:34:04 | emory-libraries/ezpaarse-platforms | https://api.github.com/repos/emory-libraries/ezpaarse-platforms | opened | Update Springer Parsers | High Priority Stakeholder Priority Update Parser | These were last updated in July, 2017:
https://github.com/emory-libraries/ezpaarse-platforms/tree/master/springer
https://github.com/emory-libraries/ezpaarse-platforms/tree/master/springer-materials
https://github.com/emory-libraries/ezpaarse-platforms/tree/master/springer-protocols
They need to be synced with these:
https://github.com/ezpaarse-project/ezpaarse-platforms/tree/master/springer
https://github.com/ezpaarse-project/ezpaarse-platforms/tree/master/springer-protocols
https://github.com/ezpaarse-project/ezpaarse-platforms/tree/master/springer-materials
Check to make sure some of the newer links like these are parsing:
https://link-springer-com.proxy.library.emory.edu/journal/417/235/8
https://link-springer-com.proxy.library.emory.edu/article/10.1023/A%3A1023254226592
https://link-springer-com.proxy.library.emory.edu/journal/11136/10/1/page/1
https://link-springer-com.proxy.library.emory.edu/content/pdf/10.1177%2F009207002236914.pdf | 2.0 | Update Springer Parsers - These were last updated in July, 2017:
https://github.com/emory-libraries/ezpaarse-platforms/tree/master/springer
https://github.com/emory-libraries/ezpaarse-platforms/tree/master/springer-materials
https://github.com/emory-libraries/ezpaarse-platforms/tree/master/springer-protocols
They need to be synced with these:
https://github.com/ezpaarse-project/ezpaarse-platforms/tree/master/springer
https://github.com/ezpaarse-project/ezpaarse-platforms/tree/master/springer-protocols
https://github.com/ezpaarse-project/ezpaarse-platforms/tree/master/springer-materials
Check to make sure some of the newer links like these are parsing:
https://link-springer-com.proxy.library.emory.edu/journal/417/235/8
https://link-springer-com.proxy.library.emory.edu/article/10.1023/A%3A1023254226592
https://link-springer-com.proxy.library.emory.edu/journal/11136/10/1/page/1
https://link-springer-com.proxy.library.emory.edu/content/pdf/10.1177%2F009207002236914.pdf | priority | update springer parsers these were last updated in july they need to be synced with these check to make sure some of the newer links like these are parsing | 1 |
322,744 | 9,828,254,543 | IssuesEvent | 2019-06-15 09:47:24 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Fatal error : when remove author meta name and date on AMP posts/articles in design-2 | NEED FAST REVIEW [Priority: HIGH] bug | When we need to remove the author meta name and date on AMP posts while using design -2.
I have tried to recreate it on localhost, so I followed these steps:
Customizing > AMP > Design > Meta info
But I observe the same thing locally.
Here are the screenshots:
1.https://monosnap.com/file/KbvEkhOlr3CCXVisFqyXCzpwm1LODg
2.https://monosnap.com/file/78gJTuHvYuyW9MQhowgDTcRIjc5aeJ
| 1.0 | Fatal error : when remove author meta name and date on AMP posts/articles in design-2 - When we need to remove the author meta name and date on AMP posts while using design -2.
I have try to recreate in localhost so i follow these steps.
Customizing > AMP> Design >Meta info
But same think i observe in local
Here see the screenshots:
1.https://monosnap.com/file/KbvEkhOlr3CCXVisFqyXCzpwm1LODg
2.https://monosnap.com/file/78gJTuHvYuyW9MQhowgDTcRIjc5aeJ
| priority | fatal error when remove author meta name and date on amp posts articles in design when we need to remove the author meta name and date on amp posts while using design i have try to recreate in localhost so i follow these steps customizing amp design meta info but same think i observe in local here see the screenshots | 1 |
220,551 | 7,360,917,197 | IssuesEvent | 2018-03-10 23:48:39 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Housing Update Message is Incorrect | High Priority | 
Not sure whats causing this, but it looks like the message is just incorrect with different skillrate multipliers. | 1.0 | Housing Update Message is Incorrect - 
Not sure whats causing this, but it looks like the message is just incorrect with different skillrate multipliers. | priority | housing update message is incorrect not sure whats causing this but it looks like the message is just incorrect with different skillrate multipliers | 1 |
4,360 | 2,550,691,029 | IssuesEvent | 2015-02-01 20:40:55 | davidlaprade/umbgov_rails | https://api.github.com/repos/davidlaprade/umbgov_rails | closed | Create Senate Roster Page | enhancement high-priority | They want a page with pictures of each senator and justice that are links to their profiles
- [x] create user#show route, action, view
- [x] update senate roster page accordingly | 1.0 | Create Senate Roster Page - They want a page with pictures of each senator and justice that are links to their profiles
- [x] create user#show route, action, view
- [x] update senate roster page accordingly | priority | create senate roster page they want a page with pictures of each senator and justice that are links to their profiles create user show route action view update senate roster page accordingly | 1 |
97,179 | 3,985,644,862 | IssuesEvent | 2016-05-08 01:35:53 | dankelley/oce | https://api.github.com/repos/dankelley/oce | closed | handle flags for argo | argo flags high priority request | This rests on #892, although it seems likely that working on both at the same time will prove to be useful. | 1.0 | handle flags for argo - This rests on #892, although it seems likely that working on both at the same time will prove to be useful. | priority | handle flags for argo this rests on although it seems likely that working on both at the same time will prove to be useful | 1 |
717,017 | 24,657,058,254 | IssuesEvent | 2022-10-18 01:14:54 | AY2223S1-CS2103T-W15-1/tp | https://api.github.com/repos/AY2223S1-CS2103T-W15-1/tp | closed | As a student, I can search for contacts using a particular tag | type.Story priority.High | ... so that I can view all relevant contacts in a particular group I am interested in.
For example:
`search t/family` returns all people tagged with family in the contact list. | 1.0 | As a student, I can search for contacts using a particular tag - ... so that I can view all relevant contacts in a particular group I am interested in.
For example:
`search t/family` returns all people tagged with family in the contact list. | priority | as a student i can search for contacts using a particular tag so that i can view all relevant contacts in a particular group i am interested in for example search t family returns all people tagged with family in the contact list | 1 |
347,178 | 10,426,042,924 | IssuesEvent | 2019-09-16 16:39:48 | jetrails/magento2-cloudflare | https://api.github.com/repos/jetrails/magento2-cloudflare | closed | Magento 2.2 Issue With Force Redirect To Extension | bug priority: high | When the extension is installed, it seems that for 2.2, it forces the user to the extension page. | 1.0 | Magento 2.2 Issue With Force Redirect To Extension - When the extension is installed, it seems that for 2.2, it forces the user to the extension page. | priority | magento issue with force redirect to extension when the extension is installed it seems that for it forces the user to the extension page | 1 |
730,332 | 25,168,195,051 | IssuesEvent | 2022-11-10 23:13:26 | VoltanFr/memcheck | https://api.github.com/repos/VoltanFr/memcheck | closed | Authoring bug: download image full size crashes | bug image complexity-low priority-high page-authoring | In the authoring page, when displaying the markdown preview, if I click on an image and then on the link `Download in full resolution`, Mnesios reports `Request failed with status code 500`.
This was seen in Mnesios version 0.42.2, editing the card [88af43aa-e877-4c70-fd99-08d7eba80d99](https://www.mnesios.com/Authoring?CardId=88af43aa-e877-4c70-fd99-08d7eba80d99).
| 1.0 | Authoring bug: download image full size crashes - In the authoring page, when displaying the markdown preview, if I click on an image and then on the link `Download in full resolution`, Mnesios reports `Request failed with status code 500`.
This was seen in Mnesios version 0.42.2, editing the card [88af43aa-e877-4c70-fd99-08d7eba80d99](https://www.mnesios.com/Authoring?CardId=88af43aa-e877-4c70-fd99-08d7eba80d99).
| priority | authoring bug download image full size crashes in the authoring page when displaying the markdown preview if i click on an image and then on the link download in full resolution mnesios reports request failed with status code this was seen in mnesios version editing the card | 1 |
757,348 | 26,507,982,928 | IssuesEvent | 2023-01-18 15:08:04 | asaierika/Kizuna | https://api.github.com/repos/asaierika/Kizuna | opened | Backward compatibility for managers | bug high priority | Many requirements in demi-human realm are not required in human realm thus resulting in bugs when using the same managers for both | 1.0 | Backward compatibility for managers - Many requirements in demi-human realm are not required in human realm thus resulting in bugs when using the same managers for both | priority | backward compatibility for managers many requirements in demi human realm are not required in human realm thus resulting in bugs when using the same managers for both | 1 |
331,698 | 10,075,959,182 | IssuesEvent | 2019-07-24 15:15:21 | AnSyn/ansyn | https://api.github.com/repos/AnSyn/ansyn | closed | (bug) chrome44: open overlay from tools "display footprint " | Bug Chrome 44 Priority: High Severity: Low | when hovering/chossing footprint over the base map the footprint disapears | 1.0 | (bug) chrome44: open overlay from tools "display footprint " - when hovering/chossing footprint over the base map the footprint disapears | priority | bug open overlay from tools display footprint when hovering chossing footprint over the base map the footprint disapears | 1 |
148,250 | 5,664,134,733 | IssuesEvent | 2017-04-11 01:00:29 | HackGT/Ultimate-Checkin | https://api.github.com/repos/HackGT/Ultimate-Checkin | closed | Improve performance for large lists | high priority | Searching and loading large (~300 or more) lists of attendees causes a significant delay that is only made worse by the fact that the list is destroyed and recreated upon every keypress in the search field. [Relevant section of code](https://github.com/HackGT/Ultimate-Checkin/blob/master/client/js/main.ts#L136) that removes and recreates new DOM elements in an inefficient manner. | 1.0 | Improve performance for large lists - Searching and loading large (~300 or more) lists of attendees causes a significant delay that is only made worse by the fact that the list is destroyed and recreated upon every keypress in the search field. [Relevant section of code](https://github.com/HackGT/Ultimate-Checkin/blob/master/client/js/main.ts#L136) that removes and recreates new DOM elements in an inefficient manner. | priority | improve performance for large lists searching and loading large or more lists of attendees causes a significant delay that is only made worse by the fact that the list is destroyed and recreated upon every keypress in the search field that removes and recreates new dom elements in an inefficient manner | 1 |
149,553 | 5,720,819,492 | IssuesEvent | 2017-04-20 03:35:41 | borevitzlab/spc-eyepi | https://api.github.com/repos/borevitzlab/spc-eyepi | closed | image upload status needs to update more reliably and frequently | Priority: High wontfix | "last upload time" is frequently inaccurate on website needs to report accurately. if the uploads are really happening this randomly then that needs to be cleaned up enough that cameras upload within a reasonable time of capture and in a predictable way
thanks | 1.0 | image upload status needs to update more reliably and frequently - "last upload time" is frequently inaccurate on website needs to report accurately. if the uploads are really happening this randomly then that needs to be cleaned up enough that cameras upload within a reasonable time of capture and in a predictable way
thanks | priority | image upload status needs to update more reliably and frequently last upload time is frequently inaccurate on website needs to report accurately if the uploads are really happening this randomly then that needs to be cleaned up enough that cameras upload within a reasonable time of capture and in a predictable way thanks | 1 |
177,533 | 6,584,912,350 | IssuesEvent | 2017-09-13 12:13:03 | ballerinalang/composer | https://api.github.com/repos/ballerinalang/composer | opened | Adding statements in the action view should be blocked | Priority/High Severity/Major Type/Improvement | Pack 0.93 13/09
Currently, the composer indicates (marking the area green) that statements can be added to the action view. But, it does not show the added statements according to the design which is correct. But, you let the user adding statements here is wrong. Also, if you switch to the default or compact view, those statements will be available. But, the user might think those statements are not added.
Therefore, adding statements in the action view should be blocked | 1.0 | Adding statements in the action view should be blocked - Pack 0.93 13/09
Currently, the composer indicates (marking the area green) that statements can be added to the action view. But, it does not show the added statements according to the design which is correct. But, you let the user adding statements here is wrong. Also, if you switch to the default or compact view, those statements will be available. But, the user might think those statements are not added.
Therefore, adding statements in the action view should be blocked | priority | adding statements in the action view should be blocked pack currently the composer indicates marking the area green that statements can be added to the action view but it does not show the added statements according to the design which is correct but you let the user adding statements here is wrong also if you switch to the default or compact view those statements will be available but the user might think those statements are not added therefore adding statements in the action view should be blocked | 1 |
156,797 | 5,989,315,481 | IssuesEvent | 2017-06-02 08:15:40 | Caleydo/taggle | https://api.github.com/repos/Caleydo/taggle | closed | Encoding for missing values for number, string, and categorical columns | bug enhancement high priority PR_filed | Steps to reproduce:
- Add a categorical column
- Stratify it
- Add a numerical column with null values
Expected result:
- The dataset should be correctly visualized
Actual result:
- The visualization shows entries twice; the aggregator column and row number column are not shown

| 1.0 | Encoding for missing values for number, string, and categorical columns - Steps to reproduce:
- Add a categorical column
- Stratify it
- Add a numerical column with null values
Expected result:
- The dataset should be correctly visualized
Actual result:
- The visualization shows entries twice; the aggregator column and row number column are not shown

| priority | encoding for missing values for number string and categorical columns steps to reproduce add a categorical column stratify it add a numerical column with null values expected result the dataset should be correctly visualized actual result the visualization shows entries twice the aggregator column and row number column are not shown | 1 |
154,526 | 5,920,573,471 | IssuesEvent | 2017-05-22 20:37:15 | DistrictDataLabs/cultivar | https://api.github.com/repos/DistrictDataLabs/cultivar | closed | Update dev files to make Vagrant and Docker startup work | Beginner DevOps in progress priority: high type: bug | Vagrant and docker dev environments don't start up properly:
- Vagrantfile is using base box's python 2 instead of python 3, which this codebase is (currently) python3 compatible only, starting with needing `urllib.parse`
```
cultivar $ vagrant up
# ....
==> default: /home/vagrant/project/venv/local/lib/python2.7/site-packages/six.py:808: RemovedInDjango110Warning: SubfieldBase has been deprecated. Use Field.from_db_value instead.
==> default: return meta(name, bases, d)
==> default: Traceback (most recent call last):
==> default: File "manage.py", line 39, in <module>
==> default:
==> default: execute_from_command_line(sys.argv)
==> default: File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
==> default:
==> default: utility.execute()
==> default: File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 327, in execute
==> default:
==> default: django.setup()
==> default: File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
==> default:
==> default: apps.populate(settings.INSTALLED_APPS)
==> default: File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
==> default:
==> default: app_config.import_models(all_models)
==> default: File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/apps/config.py", line 202, in import_models
==> default:
==> default: self.models_module = import_module(models_module_name)
==> default: File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
==> default:
==> default: __import__(name)
==> default: File "/home/vagrant/project/members/models.py", line 20, in <module>
==> default:
==> default: import urllib.parse
==> default: ImportError
==> default: :
==> default: No module named parse
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
cultivar $
```
- Docker compose file has container links to rabbitmq service but no rabbitmq service is defined. Possibly the dangling redis service was intended to replace it but if so the docker-compose as well as the rest of the default celery config needs to be updated accordingly, so it needs to go one way or the other:
```
cultivar $ docker-compose up -d
ERROR: Service 'celery' depends on service 'rabbitmq' which is undefined.
``` | 1.0 | Update dev files to make Vagrant and Docker startup work - Vagrant and docker dev environments don't start up properly:
- Vagrantfile is using base box's python 2 instead of python 3, which this codebase is (currently) python3 compatible only, starting with needing `urllib.parse`
```
cultivar $ vagrant up
# ....
==> default: /home/vagrant/project/venv/local/lib/python2.7/site-packages/six.py:808: RemovedInDjango110Warning: SubfieldBase has been deprecated. Use Field.from_db_value instead.
==> default: return meta(name, bases, d)
==> default: Traceback (most recent call last):
==> default: File "manage.py", line 39, in <module>
==> default:
==> default: execute_from_command_line(sys.argv)
==> default: File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
==> default:
==> default: utility.execute()
==> default: File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 327, in execute
==> default:
==> default: django.setup()
==> default: File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
==> default:
==> default: apps.populate(settings.INSTALLED_APPS)
==> default: File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
==> default:
==> default: app_config.import_models(all_models)
==> default: File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/apps/config.py", line 202, in import_models
==> default:
==> default: self.models_module = import_module(models_module_name)
==> default: File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
==> default:
==> default: __import__(name)
==> default: File "/home/vagrant/project/members/models.py", line 20, in <module>
==> default:
==> default: import urllib.parse
==> default: ImportError
==> default: :
==> default: No module named parse
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
cultivar $
```
- Docker compose file has container links to rabbitmq service but no rabbitmq service is defined. Possibly the dangling redis service was intended to replace it but if so the docker-compose as well as the rest of the default celery config needs to be updated accordingly, so it needs to go one way or the other:
```
cultivar $ docker-compose up -d
ERROR: Service 'celery' depends on service 'rabbitmq' which is undefined.
``` | priority | update dev files to make vagrant and docker startup work vagrant and docker dev environments don t start up properly vagrantfile is using base box s python instead of python which this codebase is currently compatible only starting with needing urllib parse cultivar vagrant up default home vagrant project venv local lib site packages six py subfieldbase has been deprecated use field from db value instead default return meta name bases d default traceback most recent call last default file manage py line in default default execute from command line sys argv default file home vagrant project venv local lib site packages django core management init py line in execute from command line default default utility execute default file home vagrant project venv local lib site packages django core management init py line in execute default default django setup default file home vagrant project venv local lib site packages django init py line in setup default default apps populate settings installed apps default file home vagrant project venv local lib site packages django apps registry py line in populate default default app config import models all models default file home vagrant project venv local lib site packages django apps config py line in import models default default self models module import module models module name default file usr lib importlib init py line in import module default default import name default file home vagrant project members models py line in default default import urllib parse default importerror default default no module named parse the ssh command responded with a non zero exit status vagrant assumes that this means the command failed the output for this command should be in the log above please read the output to determine what went wrong cultivar docker compose file has container links to rabbitmq service but no rabbitmq service is defined possibly the dangling redis service was intended to replace it but if so the docker compose as well as the rest of the default celery config needs to be updated accordingly so it needs to go one way or the other cultivar docker compose up d error service celery depends on service rabbitmq which is undefined | 1
397,215 | 11,725,372,785 | IssuesEvent | 2020-03-10 12:50:04 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | Issue with Forum search when logged for non-admin users | bug priority: high | **Describe the bug**
When you log in as a non-admin user and try to search forums using the search bar appearing on the forum page.
You will get the no results 'Sorry, there were no discussions found.
But When logged as Admin, the forum's search works fine.
**To Reproduce**
Steps to reproduce the behavior:
1. Enable the forum banner and search from theme settings >> forums (If not enabled).
1. Logged as a normal user and search for forums or discussions.
2. See error (Sorry, there were no discussions found).
**Expected behavior**
It should return the result that matches the search keyword.
**Screencast*
https://screencast-o-matic.com/watch/cYe6lBxqyA
**Support ticket links**
https://buddyboss.zendesk.com/agent/tickets/63049
| 1.0 | Issue with Forum search when logged for non-admin users - **Describe the bug**
When you log in as a non-admin user and try to search forums using the search bar appearing on the forum page.
You will get the no results 'Sorry, there were no discussions found.
But When logged as Admin, the forum's search works fine.
**To Reproduce**
Steps to reproduce the behavior:
1. Enable the forum banner and search from theme settings >> forums (If not enabled).
1. Logged as a normal user and search for forums or discussions.
2. See error (Sorry, there were no discussions found).
**Expected behavior**
It should return the result that matches the search keyword.
**Screencast*
https://screencast-o-matic.com/watch/cYe6lBxqyA
**Support ticket links**
https://buddyboss.zendesk.com/agent/tickets/63049
| priority | issue with forum search when logged for non admin users describe the bug when you log in as a non admin user and try to search forums using the search bar appearing on the forum page you will get the no results sorry there were no discussions found but when logged as admin the forum s search works fine to reproduce steps to reproduce the behavior enable the forum banner and search from theme settings forums if not enabled logged as a normal user and search for forums or discussions see error sorry there were no discussions found expected behavior it should return the result that matches the search keyword screencast support ticket links | 1 |
446,602 | 12,867,592,667 | IssuesEvent | 2020-07-10 07:13:36 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | closed | SS14 crashes on Mac and uses old binary icon. | Priority: 1-high Type: Bug W: Next | <img width="955" alt="Screen Shot 2020-05-31 at 11 59 22 AM" src="https://user-images.githubusercontent.com/49448379/83360472-75e93280-a336-11ea-8f23-3a38c895b3b5.png">
<img width="514" alt="Screen Shot 2020-05-31 at 12 03 08 PM" src="https://user-images.githubusercontent.com/49448379/83360518-b648b080-a336-11ea-87bf-78ab62c020d3.png">
Reproduced by downloading the launcher and trying to connect to Wizard's Den. | 1.0 | SS14 crashes on Mac and uses old binary icon. - <img width="955" alt="Screen Shot 2020-05-31 at 11 59 22 AM" src="https://user-images.githubusercontent.com/49448379/83360472-75e93280-a336-11ea-8f23-3a38c895b3b5.png">
<img width="514" alt="Screen Shot 2020-05-31 at 12 03 08 PM" src="https://user-images.githubusercontent.com/49448379/83360518-b648b080-a336-11ea-87bf-78ab62c020d3.png">
Reproduced by downloading the launcher and trying to connect to Wizard's Den. | priority | crashes on mac and uses old binary icon img width alt screen shot at am src img width alt screen shot at pm src reproduced by downloading the launcher and trying to connect to wizard s den | 1 |
489,301 | 14,104,415,253 | IssuesEvent | 2020-11-06 11:50:16 | wso2/product-apim-tooling | https://api.github.com/repos/wso2/product-apim-tooling | closed | Support custom throttling policies from apictl | Next Release - 4.x Priority/High Type/New Feature | **Description:**
Custom application/subscription, advanced policies import/export are currently not supported. We need to support it. | 1.0 | Support custom throttling policies from apictl - **Description:**
Custom application/subscription, advanced policies import/export are currently not supported. We need to support it. | priority | support custom throttling policies from apictl description custom application subscription advanced policies import export are currently not supported we need to support it | 1 |
390,051 | 11,521,072,921 | IssuesEvent | 2020-02-14 15:57:59 | python/mypy | https://api.github.com/repos/python/mypy | closed | Internal error on dataclasses with properties | crash priority-0-high topic-plugins | On CPython 3.7.2 [GCC 7.3.0] on linux, with mypy 0.670
When attempting to follow the guidelines described in [this article](https://blog.florimondmanca.com/reconciling-dataclasses-and-properties-in-python)
As illustrated here:
```
from dataclasses import dataclass
from typing import Optional
@dataclass(init=False)
class C:
a: Optional[int]
def __init__(self, a: Optional[int] = None) -> None:
self._a = None
self.a = a
@property
def a(self) -> Optional[int]:
return self._a
@a.setter
def a(self, value: Optional[int]) -> Optional[int]:
self._a = value
return self._a
```
Mypy (without options) raises an `INTERNAL ERROR` with the following:
```
Traceback (most recent call last):
File ".../bin/mypy", line 10, in <module>
sys.exit(console_entry())
File ".../lib/python3.7/site-packages/mypy/__main__.py", line 7, in console_entry
main(None)
File ".../lib/python3.7/site-packages/mypy/main.py", line 91, in main
res = build.build(sources, options, None, flush_errors, fscache)
File ".../lib/python3.7/site-packages/mypy/build.py", line 162, in build
result = _build(sources, options, alt_lib_path, flush_errors, fscache)
File ".../lib/python3.7/site-packages/mypy/build.py", line 217, in _build
graph = dispatch(sources, manager)
File ".../lib/python3.7/site-packages/mypy/build.py", line 2360, in dispatch
process_graph(graph, manager)
File ".../lib/python3.7/site-packages/mypy/build.py", line 2660, in process_graph
process_stale_scc(graph, scc, manager)
File ".../lib/python3.7/site-packages/mypy/build.py", line 2761, in process_stale_scc
graph[id].semantic_analysis()
File ".../lib/python3.7/site-packages/mypy/build.py", line 1896, in semantic_analysis
self.manager.semantic_analyzer.visit_file(self.tree, self.xpath, self.options, patches)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 306, in visit_file
self.accept(d)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 3787, in accept
node.accept(self)
File ".../lib/python3.7/site-packages/mypy/nodes.py", line 846, in accept
return visitor.visit_class_def(self)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 804, in visit_class_def
self.analyze_class(defn)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 822, in analyze_class
self.analyze_class_body_common(defn)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 830, in analyze_class_body_common
self.apply_class_plugin_hooks(defn)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 892, in apply_class_plugin_hooks
hook(ClassDefContext(defn, decorator, self))
File ".../lib/python3.7/site-packages/mypy/plugins/dataclasses.py", line 312, in dataclass_class_maker_callback
transformer.transform()
File ".../lib/python3.7/site-packages/mypy/plugins/dataclasses.py", line 79, in transform
attributes = self.collect_attributes()
File ".../lib/python3.7/site-packages/mypy/plugins/dataclasses.py", line 189, in collect_attributes
assert isinstance(node, Var)
AssertionError:
```
Any suggestions about workarounds or better implementations of the initial premise would be much appreciated! | 1.0 | Internal error on dataclasses with properties - On CPython 3.7.2 [GCC 7.3.0] on linux, with mypy 0.670
When attempting to follow the guidelines described in [this article](https://blog.florimondmanca.com/reconciling-dataclasses-and-properties-in-python)
As illustrated here:
```
from dataclasses import dataclass
from typing import Optional
@dataclass(init=False)
class C:
a: Optional[int]
def __init__(self, a: Optional[int] = None) -> None:
self._a = None
self.a = a
@property
def a(self) -> Optional[int]:
return self._a
@a.setter
def a(self, value: Optional[int]) -> Optional[int]:
self._a = value
return self._a
```
Mypy (without options) raises an `INTERNAL ERROR` with the following:
```
Traceback (most recent call last):
File ".../bin/mypy", line 10, in <module>
sys.exit(console_entry())
File ".../lib/python3.7/site-packages/mypy/__main__.py", line 7, in console_entry
main(None)
File ".../lib/python3.7/site-packages/mypy/main.py", line 91, in main
res = build.build(sources, options, None, flush_errors, fscache)
File ".../lib/python3.7/site-packages/mypy/build.py", line 162, in build
result = _build(sources, options, alt_lib_path, flush_errors, fscache)
File ".../lib/python3.7/site-packages/mypy/build.py", line 217, in _build
graph = dispatch(sources, manager)
File ".../lib/python3.7/site-packages/mypy/build.py", line 2360, in dispatch
process_graph(graph, manager)
File ".../lib/python3.7/site-packages/mypy/build.py", line 2660, in process_graph
process_stale_scc(graph, scc, manager)
File ".../lib/python3.7/site-packages/mypy/build.py", line 2761, in process_stale_scc
graph[id].semantic_analysis()
File ".../lib/python3.7/site-packages/mypy/build.py", line 1896, in semantic_analysis
self.manager.semantic_analyzer.visit_file(self.tree, self.xpath, self.options, patches)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 306, in visit_file
self.accept(d)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 3787, in accept
node.accept(self)
File ".../lib/python3.7/site-packages/mypy/nodes.py", line 846, in accept
return visitor.visit_class_def(self)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 804, in visit_class_def
self.analyze_class(defn)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 822, in analyze_class
self.analyze_class_body_common(defn)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 830, in analyze_class_body_common
self.apply_class_plugin_hooks(defn)
File ".../lib/python3.7/site-packages/mypy/semanal.py", line 892, in apply_class_plugin_hooks
hook(ClassDefContext(defn, decorator, self))
File ".../lib/python3.7/site-packages/mypy/plugins/dataclasses.py", line 312, in dataclass_class_maker_callback
transformer.transform()
File ".../lib/python3.7/site-packages/mypy/plugins/dataclasses.py", line 79, in transform
attributes = self.collect_attributes()
File ".../lib/python3.7/site-packages/mypy/plugins/dataclasses.py", line 189, in collect_attributes
assert isinstance(node, Var)
AssertionError:
```
Any suggestions about workarounds or better implementations of the initial premise would be much appreciated! | priority | internal error on dataclasses with properties on cpython on linux with mypy when attempting to follow the guidelines described in as illustrated here from dataclasses import dataclass from typing import optional dataclass init false class c a optional def init self a optional none none self a none self a a property def a self optional return self a a setter def a self value optional optional self a value return self a mypy without options raises an internal error with the following traceback most recent call last file bin mypy line in sys exit console entry file lib site packages mypy main py line in console entry main none file lib site packages mypy main py line in main res build build sources options none flush errors fscache file lib site packages mypy build py line in build result build sources options alt lib path flush errors fscache file lib site packages mypy build py line in build graph dispatch sources manager file lib site packages mypy build py line in dispatch process graph graph manager file lib site packages mypy build py line in process graph process stale scc graph scc manager file lib site packages mypy build py line in process stale scc graph semantic analysis file lib site packages mypy build py line in semantic analysis self manager semantic analyzer visit file self tree self xpath self options patches file lib site packages mypy semanal py line in visit file self accept d file lib site packages mypy semanal py line in accept node accept self file lib site packages mypy nodes py line in accept return visitor visit class def self file lib site packages mypy semanal py line in visit class def self analyze class defn file lib site packages mypy semanal py line in analyze class self analyze class body common defn file lib site packages mypy semanal py line in analyze class body common self apply class plugin hooks defn file 
lib site packages mypy semanal py line in apply class plugin hooks hook classdefcontext defn decorator self file lib site packages mypy plugins dataclasses py line in dataclass class maker callback transformer transform file lib site packages mypy plugins dataclasses py line in transform attributes self collect attributes file lib site packages mypy plugins dataclasses py line in collect attributes assert isinstance node var assertionerror any suggestions about workarounds or better implementations of the initial premise would be much appreciated | 1 |
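The traceback above ends in the mypy dataclass plugin's `collect_attributes`, which crashes when a dataclass field and a property share the same name. A workaround sometimes suggested for this class of crash (an assumption on my part, not something confirmed in the issue thread) is to store the value under a differently named field and expose the public name only through the property:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class C:
    # The dataclass manages the private field `_a`; the public name `a`
    # exists only as a property, so no field/property name collision occurs.
    _a: Optional[int] = None

    @property
    def a(self) -> Optional[int]:
        return self._a

    @a.setter
    def a(self, value: Optional[int]) -> None:
        self._a = value

c = C()
c.a = 5
```

The trade-off is that the generated `__init__` takes `_a` rather than `a` as its parameter name, which may or may not be acceptable for the original use case.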
282,460 | 8,706,686,807 | IssuesEvent | 2018-12-06 04:05:50 | magda-io/magda | https://api.github.com/repos/magda-io/magda | opened | Focus moves unexpectedly and creates keyboard trap - IE only (critical) | priority: high | ### Problem description
Intopia issue 39 - When an Internet Explorer user adds search options or refinements – by way of autocomplete suggestions or by filtering their search with facets – the search history creates a keyboard trap. If the user moves focus, focus moves immediately to the top of the page. Tabbing forward goes only as far as the search box with the displayed search history and then focus returns to the top of the page. The keyboard user is trapped.
This only occurs if the main search box has built a search history, or if facets are applied to a search. Otherwise, there is no keyboard trap.
Note that in Firefox, focus also shifts to the top of the page when a user attempts to tab out of a search box with a search history, but the user can then tab through to the search box and past it if they wish – there’s no keyboard trap, but the unexpected focus shift is confusing for users.
### Problem reproduction steps
Ensure that the keyboard user can move focus out of the search box and forward through the page.
### Screenshot / Design / File reference
[39] Focus moves unexpectedly and creates keyboard trap (IE only)
intopia.digital 44
Type: WCAG 2.1 failure
Fails 2.1.2 No Keyboard Trap - Level A
Location: Search box, Search page
Platform: Desktop
Severity: Critical
What’s the issue?
When an Internet Explorer user adds search options or refinements – by way of autocomplete suggestions or by filtering their search with facets – the search history creates a keyboard trap. If the user moves focus, focus moves immediately to the top of the page. Tabbing forward goes only as far as the search box with the displayed search history and then focus returns to the top of the page. The keyboard user is trapped.
This only occurs if the main search box has built a search history, or if facets are applied to a search. Otherwise, there is no keyboard trap.
Note that in Firefox, focus also shifts to the top of the page when a user attempts to tab out of a search box with a search history, but the user can then tab through to the search box and past it if they wish – there’s no keyboard trap, but the unexpected focus shift is confusing for users.
Why it matters
The keyboard user’s fundamental ability to navigate the page is broken.
Our recommendation
Ensure that the keyboard user can move focus out of the search box and forward through the page. | 1.0 | Focus moves unexpectedly and creates keyboard trap - IE only (critical) - ### Problem description
Intopia issue 39 - When an Internet Explorer user adds search options or refinements – by way of autocomplete suggestions or by filtering their search with facets – the search history creates a keyboard trap. If the user moves focus, focus moves immediately to the top of the page. Tabbing forward goes only as far as the search box with the displayed search history and then focus returns to the top of the page. The keyboard user is trapped.
This only occurs if the main search box has built a search history, or if facets are applied to a search. Otherwise, there is no keyboard trap.
Note that in Firefox, focus also shifts to the top of the page when a user attempts to tab out of a search box with a search history, but the user can then tab through to the search box and past it if they wish – there’s no keyboard trap, but the unexpected focus shift is confusing for users.
### Problem reproduction steps
Ensure that the keyboard user can move focus out of the search box and forward through the page.
### Screenshot / Design / File reference
[39] Focus moves unexpectedly and creates keyboard trap (IE only)
intopia.digital 44
Type: WCAG 2.1 failure
Fails 2.1.2 No Keyboard Trap - Level A
Location: Search box, Search page
Platform: Desktop
Severity: Critical
What’s the issue?
When an Internet Explorer user adds search options or refinements – by way of autocomplete suggestions or by filtering their search with facets – the search history creates a keyboard trap. If the user moves focus, focus moves immediately to the top of the page. Tabbing forward goes only as far as the search box with the displayed search history and then focus returns to the top of the page. The keyboard user is trapped.
This only occurs if the main search box has built a search history, or if facets are applied to a search. Otherwise, there is no keyboard trap.
Note that in Firefox, focus also shifts to the top of the page when a user attempts to tab out of a search box with a search history, but the user can then tab through to the search box and past it if they wish – there’s no keyboard trap, but the unexpected focus shift is confusing for users.
Why it matters
The keyboard user’s fundamental ability to navigate the page is broken.
Our recommendation
Ensure that the keyboard user can move focus out of the search box and forward through the page. | priority | focus moves unexpectedly and creates keyboard trap ie only critical problem description intopia issue when an internet explorer user adds search options or refinements – by way of autocomplete suggestions or by filtering their search with facets – the search history creates a keyboard trap if the user moves focus focus moves immediately to the top of the page tabbing forward goes only as far as the search box with the displayed search history and then focus returns to the top of the page the keyboard user is trapped this only occurs if the main search box has built a search history or if facets are applied to a search otherwise there is no keyboard trap note that in firefox focus also shifts to the top of the page when a user attempts to tab out of a search box with a search history but the user can then tab through to the search box and past it if they wish – there’s no keyboard trap but the unexpected focus shift is confusing for users problem reproduction steps ensure that the keyboard user can move focus out of the search box and forward through the page screenshot design file reference focus moves unexpectedly and creates keyboard trap ie only intopia digi tal type wcag failure fails no keyboard trap level a location search box search page platform desktop severity critical what’s the issue when an internet explorer user adds search options or refinements – by way of autocomplete suggestions or by filtering their search with facets – the search history creates a keyboard trap if the user moves focus focus moves immediately to the top of the page tabbing forward goes only as far as the search box with the displayed search history and then focus returns to the top of the page the keyboard user is trapped this only occurs if the main search box has built a search history or if facets are applied to a search otherwise there is no keyboard trap note that in 
firefox focus also shifts to the top of the page when a user attempts to tab out of a search box with a search history but the user can then tab through to the search box and past it if they wish – there’s no keyboard trap but the unexpected focus shift is confusing for users why it matters the keyboard user’s fundamental ability to navigate the page is broken our recommendation ensure that the keyboard user can move focus out of the search box and forward through the page | 1 |
120,029 | 4,779,378,669 | IssuesEvent | 2016-10-27 22:16:26 | adobe/brackets | https://api.github.com/repos/adobe/brackets | closed | grunt build fails due to linting errors | high priority | ### Description
`grunt build` is failing due to 36 linting errors
### Steps to Reproduce
1. npm install
2. grunt setup
3. grunt build
**Expected behavior:** No linting errors
**Actual behavior:**
```
36 problems (36 errors, 0 warnings)
Warning: Task "eslint:src" failed. Use --force to continue.
```
### Versions
OSX 10.11.6. Latest master of brackets
| 1.0 | grunt build fails due to linting errors - ### Description
`grunt build` is failing due to 36 linting errors
### Steps to Reproduce
1. npm install
2. grunt setup
3. grunt build
**Expected behavior:** No linting errors
**Actual behavior:**
```
36 problems (36 errors, 0 warnings)
Warning: Task "eslint:src" failed. Use --force to continue.
```
### Versions
OSX 10.11.6. Latest master of brackets
| priority | grunt build fails due to linting errors description grunt build is failing due to linting errors steps to reproduce npm install grunt setup grunt build expected behavior no linting errors actual behavior problems errors warnings warning task eslint src failed use force to continue versions osx latest master of brackets | 1 |
479,742 | 13,805,272,660 | IssuesEvent | 2020-10-11 12:59:24 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | opened | Missing TLAs under the IRS Performance Reporting | Priority: High | - [ ] There are spray operators whose entries are not showing up under the IRS Performance Reporting. We need to check if there is data from the following SOPs getting to the system: tlanch12, tlanch18, tlanch20, tlanch21 and tlanch25. | 1.0 | Missing TLAs under the IRS Performance Reporting - - [ ] There are spray operators whose entries are not showing up under the IRS Performance Reporting. We need to check if there is data from the following SOPs getting to the system: tlanch12, tlanch18, tlanch20, tlanch21 and tlanch25. | priority | missing tlas under the irs performance reporting there are spray operators whose entries are not showing up under the irs performance reporting we need to check if there is data from the following sops getting to the system and | 1 |
355,309 | 10,579,229,520 | IssuesEvent | 2019-10-08 01:50:03 | storybookjs/storybook | https://api.github.com/repos/storybookjs/storybook | closed | Addon-docs: Canvas-only stories | addon: docs feature request high priority todo | Users wish to define stories that show up in Storybook's canvas (and in visual regression tests, etc) but not to show up in docs.
### Case 1: DocsPage
User defines stories in CSF and renders docs using `DocsPage`, but wishes to exclude some of the stories from the `DocsPage` to reduce noise on the page.
### Case 2: MDX Stories
User writes documentation & stories side-by-side in a single `MDX` file, and wants those stories to show up in the canvas but not in the docs themselves. They want something similar to the recipe ["CSF stories with MDX docs"](https://github.com/storybookjs/storybook/blob/next/addons/docs/docs/recipes.md#csf-stories-with-mdx-docs) but want to do everything in `MDX`
## Proposed solution
Stories get an optional `docs.disable` parameter that:
- [ ] Excludes it from DocsPage if applied in CSF
- [ ] Doesn't render the story in the Docs tab if applied in MDX
### CSF
```js
export const foo = () => <Button label="foo" />
foo.story = { parameters: { docs: { disable: true } } }
```
### MDX
```jsx
<Story name="foo" parameters={{ docs: { disable: true }} >
<Button label="foo" />
</Story>
``` | 1.0 | Addon-docs: Canvas-only stories - Users wish to define stories that show up in Storybook's canvas (and in visual regression tests, etc) but not to show up in docs.
### Case 1: DocsPage
User defines stories in CSF and renders docs using `DocsPage`, but wishes to exclude some of the stories from the `DocsPage` to reduce noise on the page.
### Case 2: MDX Stories
User writes documentation & stories side-by-side in a single `MDX` file, and wants those stories to show up in the canvas but not in the docs themselves. They want something similar to the recipe ["CSF stories with MDX docs"](https://github.com/storybookjs/storybook/blob/next/addons/docs/docs/recipes.md#csf-stories-with-mdx-docs) but want to do everything in `MDX`
## Proposed solution
Stories get an optional `docs.disable` parameter that:
- [ ] Excludes it from DocsPage if applied in CSF
- [ ] Doesn't render the story in the Docs tab if applied in MDX
### CSF
```js
export const foo = () => <Button label="foo" />
foo.story = { parameters: { docs: { disable: true } } }
```
### MDX
```jsx
<Story name="foo" parameters={{ docs: { disable: true }} >
<Button label="foo" />
</Story>
``` | priority | addon docs canvas only stories users wish to define stories that show up in storybook s canvas and in visual regression tests etc but not to show up in docs case docspage user defines stories in csf and renders docs using docspage but wishes to exclude some fo the stories from the docspage to reduce noise on the page case mdx stories user writes documentation stories side by side in a single mdx file and wants those stories to show up in the canvas but not in the docs themselves they want something similar to the recipe but want to do everything in mdx proposed solution stories get an optional docs disable parameter that excludes it from docspage if applied in csf doesn t render the story in the docs tab if applied in mdx csf js export const foo foo story parameters docs disable true mdx jsx | 1 |
171,029 | 6,476,763,539 | IssuesEvent | 2017-08-18 00:07:49 | sussol/mobile | https://api.github.com/repos/sussol/mobile | opened | Supplier Requisition lines disappear after Supplier Invoice is synced back to mobile store | Bug Priority: High | Build Number: current master (last commit 17/08/17 34a449a27baaab071b4817085c9bedb0282e3a1b) and mobile inter-store-requisition-cleanup branch.
Description: When a customer invoice is synced out from Store B and a Supplier Invoice is created in Store A following a request from Store A, the requisition lines on the original Supplier Requisition (in Store A) disappear on mobile.
Reproducible: Always (as long as SI is being created on mobile)
Reproduction Steps: With two mobiles running, create Supplier Requisition on Store A. Sync both stores to create Customer Requisition and Customer Invoice on Store B. Sync both stores again. If a supplier invoice is created in Store A, the requisition lines on the Supplier Requisition disappear, but the requisition remains.
Comments:
The Supplier Requisition and the Requisition lines are still there on the mSupply server, even after they have disappeared on mobile. | 1.0 | Supplier Requisition lines disappear after Supplier Invoice is synced back to mobile store - Build Number: current master (last commit 17/08/17 34a449a27baaab071b4817085c9bedb0282e3a1b) and mobile inter-store-requisition-cleanup branch.
Description: When a customer invoice is synced out from Store B and a Supplier Invoice is created in Store A following a request from Store A, the requisition lines on the original Supplier Requisition (in Store A) disappear on mobile.
Reproducible: Always (as long as SI is being created on mobile)
Reproduction Steps: With two mobiles running, create Supplier Requisition on Store A. Sync both stores to create Customer Requisition and Customer Invoice on Store B. Sync both stores again. If a supplier invoice is created in Store A, the requisition lines on the Supplier Requisition disappear, but the requisition remains.
Comments:
The Supplier Requisition and the Requisition lines are still there on the mSupply server, even after they have disappeared on mobile. | priority | supplier requisition lines disappear after supplier invoice is synced back to mobile store build number current master last commit and mobile inter store requisition cleanup branch description when a customer invoice is synced out from store b and a supplier invoice is created in store a following a request from store a the requisition lines on the original supplier requisition in store a disappears on mobile reproducible always as long as si is being created on mobile reproduction steps with two mobiles running create supplier requisition on store a sync both stores to create customer requisition and customer invoice on store b sync both stores again if a supplier invoice is created in store a the requisition lines on the supplier requisition disappear but the requisition remains comments the supplier requisition and the requisition lines are still there on the msupply server even after they have disappeared on mobile | 1 |
476,659 | 13,748,279,242 | IssuesEvent | 2020-10-06 08:51:13 | nthState/Haptrix | https://api.github.com/repos/nthState/Haptrix | closed | [Bug] - Haptrix does not open AHAP files without Metadata | high-priority macOS | Any metadata should be optional. Notice that it is not required or even mentioned in Apple's [documentation for AHAP files](https://developer.apple.com/documentation/corehaptics/representing_haptic_patterns_in_ahap_files). | 1.0 | [Bug] - Haptrix does not open AHAP files without Metadata - Any metadata should be optional. Notice that it is not required or even mentioned in Apple's [documentation for AHAP files](https://developer.apple.com/documentation/corehaptics/representing_haptic_patterns_in_ahap_files). | priority | haptrix does not open ahap files without metadata any metadata should be optional notice that it is not required or even mentioned in apple s | 1 |
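To illustrate the record above: an AHAP file needs only a top-level `Version` and `Pattern`, with `Metadata` being optional. The sketch below builds a minimal metadata-free pattern as a Python dict and serializes it to JSON; the key names follow Apple's published AHAP examples, but the specific event parameters chosen here are illustrative, not taken from the issue.

```python
import json

# Minimal AHAP document with no "Metadata" key: a single transient haptic
# event at t=0 with an intensity and sharpness parameter.
minimal_ahap = {
    "Version": 1.0,
    "Pattern": [
        {
            "Event": {
                "Time": 0.0,
                "EventType": "HapticTransient",
                "EventParameters": [
                    {"ParameterID": "HapticIntensity", "ParameterValue": 1.0},
                    {"ParameterID": "HapticSharpness", "ParameterValue": 0.5},
                ],
            }
        }
    ],
}

# Serialize exactly as it would be written to a .ahap file on disk.
ahap_text = json.dumps(minimal_ahap, indent=2)
```

A tool that requires `Metadata` to be present would reject this document even though it is valid per Apple's format description, which is the behavior the bug report describes.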
323,590 | 9,856,934,151 | IssuesEvent | 2019-06-20 00:16:21 | hacker-s-gate/events | https://api.github.com/repos/hacker-s-gate/events | closed | LT meetup, June 2019 / publicity | high priority | #9
- [ ] Publicity (everyone)
- [ ] SNS, directly to acquaintances, affiliated communities, etc.
- [x] Right after the event
- [ ] Periodically
- [ ] Just before the event
- [ ] In-house (request to help spread the word)
- [ ] slack
- [ ] Morning meeting | 1.0 | LT meetup, June 2019 / publicity - #9
- [ ] Publicity (everyone)
- [ ] SNS, directly to acquaintances, affiliated communities, etc.
- [x] Right after the event
- [ ] Periodically
- [ ] Just before the event
- [ ] In-house (request to help spread the word)
- [ ] slack
- [ ] Morning meeting | priority | lt meetup publicity publicity everyone sns directly to acquaintances affiliated communities etc right after the event periodically just before the event in house request to help spread the word slack morning meeting | 1