Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 5-112) | repo_url (string, length 34-141) | action (string, 3 classes) | title (string, length 1-855) | labels (string, length 4-721) | body (string, length 1-261k) | index (string, 13 classes) | text_combine (string, length 96-261k) | label (string, 2 classes) | text (string, length 96-240k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
386,701 | 11,449,129,359 | IssuesEvent | 2020-02-06 06:09:51 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1384] Contracts: broken creation and acceptance | Priority: High Status: Fixed | 1. Try to create a contract

Don't finish it
2. Try to create the new one

3. And now try to open non-finished one from the Work tab
You can't
And from Economy viewer too
And you can't accept a contract either, for the same reason (no contract UI)

| 1.0 | [0.9.0 staging-1384] Contracts: broken creation and acceptance - 1. Try to create a contract

Don't finish it
2. Try to create the new one

3. And now try to open non-finished one from the Work tab
You can't
And from Economy viewer too
And you can't accept a contract either, for the same reason (no contract UI)

| priority | contracts broken creation and acceptance try to create a contract don t finish it try to create the new one and now try to open non finished one from the work tab you can t and from economy viewer too and you can t accept a contract either for the same reason no contract ui | 1 |
398,796 | 11,742,347,795 | IssuesEvent | 2020-03-12 00:27:27 | statechannels/monorepo | https://api.github.com/repos/statechannels/monorepo | closed | Create workflow that opens and directly funds a ledger channel | High Priority xstate-wallet | We need a workflow that opens and directly funds a ledger channel with the hub. | 1.0 | Create workflow that opens and directly funds a ledger channel - We need a workflow that opens and directly funds a ledger channel with the hub. | priority | create workflow that opens and directly funds a ledger channel we need a workflow that opens and directly funds a ledger channel with the hub | 1 |
480,800 | 13,867,143,216 | IssuesEvent | 2020-10-16 08:01:30 | wso2/product-is | https://api.github.com/repos/wso2/product-is | opened | Misleading description in Account disable setting | Priority/High identity-core improvement ux | **Is your suggestion related to an experience ? Please describe.**
Under Account Disable setting, there is a config to enable the feature and the description says `Allow an administrative user to disable user accounts`.
However, regardless of whether the feature is enabled or not, administrators are able to disable the account via the management console or via the API.
However, the configuration is taken into consideration for sending the account disable notifications to the user.
**Describe the improvement**
Some suggestions,
- Change the label/description of the configuration to a more meaningful one
- Prevent allowing the administrators to disable accounts as the configuration explains
**Additional context**
- if the labels/description are changed they must come from the underlying governance connector implementation
| 1.0 | Misleading description in Account disable setting - **Is your suggestion related to an experience ? Please describe.**
Under Account Disable setting, there is a config to enable the feature and the description says `Allow an administrative user to disable user accounts`.
However, regardless of whether the feature is enabled or not, administrators are able to disable the account via the management console or via the API.
However, the configuration is taken into consideration for sending the account disable notifications to the user.
**Describe the improvement**
Some suggestions,
- Change the label/description of the configuration to a more meaningful one
- Prevent allowing the administrators to disable accounts as the configuration explains
**Additional context**
- if the labels/description are changed they must come from the underlying governance connector implementation
| priority | misleading description in account disable setting is your suggestion related to an experience please describe under account disable setting there is a config to enable the feature and the description says allow an administrative user to disable user accounts however regardless of whether the feature is enabled or not administrators are able to disable the account via the management console or via the api however the configuration is taken into consideration for sending the account disable notifications to the user describe the improvement some suggestions change the label description of the configuration to a more meaningful one prevent allowing the administrators to disable accounts as the configuration explains additional context if the labels description are changed they must come from the underlying governance connector implementation | 1 |
580,458 | 17,258,680,611 | IssuesEvent | 2021-07-22 02:20:59 | TestCentric/testcentric-gui | https://api.github.com/repos/TestCentric/testcentric-gui | closed | Restore TestPropertiesDialog for use with the mini-GUI | Feature High Priority | The older `TestPropertiesDialog` was removed in favor of `TestPropertiesView`. However, this leaves the mini-GUI without any way to display test or result details.
We'll restore and modify the original dialog to work with the mini-GUI. Subsequently, we may be able to merge the two views into one, housing it either in a dialog or a tabbed window according to the layout selected. | 1.0 | Restore TestPropertiesDialog for use with the mini-GUI - The older `TestPropertiesDialog` was removed in favor of `TestPropertiesView`. However, this leaves the mini-GUI without any way to display test or result details.
We'll restore and modify the original dialog to work with the mini-GUI. Subsequently, we may be able to merge the two views into one, housing it either in a dialog or a tabbed window according to the layout selected. | priority | restore testpropertiesdialog for use with the mini gui the older testpropertiesdialog was removed in favor of testpropertiesview however this leaves the mini gui without any way to display test or result details we ll restore and modify the original dialog to work with the mini gui subsequently we may be able to merge the two views into one housing it either in a dialog or a tabbed window according to the layout selected | 1 |
434,639 | 12,520,997,495 | IssuesEvent | 2020-06-03 16:46:12 | scality/metalk8s | https://api.github.com/repos/scality/metalk8s | opened | Custom salt k8s module to List does not work with CustomObjects | complexity:easy kind:bug priority:high | <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately to moonshot-platform@scality.com
-->
**Component**:
'salt'
<!-- E.g. 'salt', 'containers', 'kubernetes', 'build', 'tests'... -->
**What happened**:
If you try to list a custom object using salt, the output is wrong
```
[root@bootstrap /]# salt-run salt.cmd metalk8s_kubernetes.list_objects kind="PrometheusRule" apiVersion="monitoring.coreos.com/v1" namespace="metalk8s-monitoring"
- items
- kind
- apiVersion
- metadata
```
**What was expected**:
Real object output
**Steps to reproduce**
:arrow_double_up:
**Resolution proposal** (optional):
```
diff --git a/salt/_utils/kubernetes_utils.py b/salt/_utils/kubernetes_utils.py
index f7849436..ed2926a4 100644
--- a/salt/_utils/kubernetes_utils.py
+++ b/salt/_utils/kubernetes_utils.py
@@ -433,13 +433,6 @@ class CustomApiClient(ApiClient):
result = base_method(*args, **kwargs)
- if verb == 'list':
- return CustomObject({
- 'kind': '{}List'.format(self.kind),
- 'apiVersion': '{s.group}/{s.version}'.format(s=self),
- 'items': [CustomObject(obj) for obj in result],
- })
-
# TODO: do we have a result for `delete` methods?
return CustomObject(result)
``` | 1.0 | Custom salt k8s module to List does not work with CustomObjects - <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately to moonshot-platform@scality.com
-->
**Component**:
'salt'
<!-- E.g. 'salt', 'containers', 'kubernetes', 'build', 'tests'... -->
**What happened**:
If you try to list a custom object using salt, the output is wrong
```
[root@bootstrap /]# salt-run salt.cmd metalk8s_kubernetes.list_objects kind="PrometheusRule" apiVersion="monitoring.coreos.com/v1" namespace="metalk8s-monitoring"
- items
- kind
- apiVersion
- metadata
```
**What was expected**:
Real object output
**Steps to reproduce**
:arrow_double_up:
**Resolution proposal** (optional):
```
diff --git a/salt/_utils/kubernetes_utils.py b/salt/_utils/kubernetes_utils.py
index f7849436..ed2926a4 100644
--- a/salt/_utils/kubernetes_utils.py
+++ b/salt/_utils/kubernetes_utils.py
@@ -433,13 +433,6 @@ class CustomApiClient(ApiClient):
result = base_method(*args, **kwargs)
- if verb == 'list':
- return CustomObject({
- 'kind': '{}List'.format(self.kind),
- 'apiVersion': '{s.group}/{s.version}'.format(s=self),
- 'items': [CustomObject(obj) for obj in result],
- })
-
# TODO: do we have a result for `delete` methods?
return CustomObject(result)
``` | priority | custom salt module to list does not work with customobjects please use this template while reporting a bug and provide as much info as possible not doing so may result in your bug not being addressed in a timely manner thanks if the matter is security related please disclose it privately to moonshot platform scality com component salt what happened if you try to list a custom object using salt the output is wrong salt run salt cmd kubernetes list objects kind prometheusrule apiversion monitoring coreos com namespace monitoring items kind apiversion metadata what was expected real object output steps to reproduce arrow double up resolution proposal optional diff git a salt utils kubernetes utils py b salt utils kubernetes utils py index a salt utils kubernetes utils py b salt utils kubernetes utils py class customapiclient apiclient result base method args kwargs if verb list return customobject kind list format self kind apiversion s group s version format s self items todo do we have a result for delete methods return customobject result | 1 |
671,981 | 22,782,524,594 | IssuesEvent | 2022-07-08 21:52:19 | mskcc/pluto-cwl | https://api.github.com/repos/mskcc/pluto-cwl | closed | fix duplicate file output in portal workflow | bug high priority | [`portal_cna_data_file`](https://github.com/mskcc/pluto-cwl/blob/3bc4fab5503e58521ed0eb9c0d035ac18460dc13/cwl/portal-workflow.cwl#L499) and [`merged_cna_file`](https://github.com/mskcc/pluto-cwl/blob/3bc4fab5503e58521ed0eb9c0d035ac18460dc13/cwl/portal-workflow.cwl#L517) are being output with the same filename (`data_CNA.txt`) which is causing one file or the other to get automatically renamed by cwltool / Toil
this is breaking test cases that expect both files to be named `data_CNA.txt`
in most test cases these two files also end up with the same sha1 hash so they are identical files
Need to figure out what the handling method for this should be | 1.0 | fix duplicate file output in portal workflow - [`portal_cna_data_file`](https://github.com/mskcc/pluto-cwl/blob/3bc4fab5503e58521ed0eb9c0d035ac18460dc13/cwl/portal-workflow.cwl#L499) and [`merged_cna_file`](https://github.com/mskcc/pluto-cwl/blob/3bc4fab5503e58521ed0eb9c0d035ac18460dc13/cwl/portal-workflow.cwl#L517) are being output with the same filename (`data_CNA.txt`) which is causing one file or the other to get automatically renamed by cwltool / Toil
this is breaking test cases that expect both files to be named `data_CNA.txt`
in most test cases these two files also end up with the same sha1 hash so they are identical files
Need to figure out what the handling method for this should be | priority | fix duplicate file output in portal workflow and are being output with the same filename data cna txt which is causing one file or the other to get automatically renamed by cwltool toil this is breaking test cases that expect both files to be named data cna txt in most test cases these two files also end up with the same hash so they are identical files need to figure out what the handling method for this should be | 1 |
315,850 | 9,632,839,797 | IssuesEvent | 2019-05-15 17:09:23 | epam/cloud-pipeline | https://api.github.com/repos/epam/cloud-pipeline | closed | Expose Git SSH clone URL to the GUI | kind/enhancement priority/high state/verify sys/gui | Extends #245
**API**
1. All the API methods, that provide clone URL (e.g. `/pipeline/{id}/load`) via the `repository`attribute - shall expose additional field `repository_ssh`
2. `repository_ssh` shall be retrieved from the GitLab, same as https URL
**GUI**
1. Add `HTTP/SSH` selector to the `GIT REPOSITORY` popup (e.g. [ant dropdown](https://2x.ant.design/components/dropdown/))
2. Default selection - `HTTP`, which shall display the same URL as now (`repository`). Popup header shall contain: `Clone repository via HTTPS` (where HTTPS is a dropdown)
3. If `SSH` is used - a value of the new `repository_ssh` attribute shall be used. Popup header shall contain: `Clone repository via SSH` (where SSH is a dropdown) | 1.0 | Expose Git SSH clone URL to the GUI - Extends #245
**API**
1. All the API methods, that provide clone URL (e.g. `/pipeline/{id}/load`) via the `repository`attribute - shall expose additional field `repository_ssh`
2. `repository_ssh` shall be retrieved from the GitLab, same as https URL
**GUI**
1. Add `HTTP/SSH` selector to the `GIT REPOSITORY` popup (e.g. [ant dropdown](https://2x.ant.design/components/dropdown/))
2. Default selection - `HTTP`, which shall display the same URL as now (`repository`). Popup header shall contain: `Clone repository via HTTPS` (where HTTPS is a dropdown)
3. If `SSH` is used - a value of the new `repository_ssh` attribute shall be used. Popup header shall contain: `Clone repository via SSH` (where SSH is a dropdown) | priority | expose git ssh clone url to the gui extends api all the api methods that provide clone url e g pipeline id load via the repository attribute shall expose additional field repository ssh repository ssh shall be retrieved from the gitlab same as https url gui add http ssh selector to the git repository popup e g default selection http which shall display the same url as now repository popup header shall contain clone repository via https where https is a dropdown if ssh is used a value of the new repository ssh attribute shall be used popup header shall contain clone repository via ssh where ssh is a dropdown | 1 |
412,614 | 12,053,179,839 | IssuesEvent | 2020-04-15 08:57:23 | TheOnlineJudge/ojudge | https://api.github.com/repos/TheOnlineJudge/ojudge | opened | Implement migration mechanism from joomla users | enhancement priority: high | The current Online Judge uses joomla. We have to implement a migration mechanism for those users to the new systems. This should include:
- Implement a 'joomla' password type, so the current joomla hashes can be used, probably forcing the user to update the password on first login, so it can be migrated to 'bcrypt', or automatically changing the hash to 'bcrypt' on first successful login.
- Keep track of the old userid, to correctly migrate submissions, or, maintain the same user IDs (I guess that migration would be preferred to allow the new user management to keep its own sequence for user IDs).
- Link all of the submissions to the correct new IDs.
- Same thing with contests, but this needs more study, as the contest system will be different. | 1.0 | Implement migration mechanism from joomla users - The current Online Judge uses joomla. We have to implement a migration mechanism for those users to the new systems. This should include:
- Implement a 'joomla' password type, so the current joomla hashes can be used, probably forcing the user to update the password on first login, so it can be migrated to 'bcrypt', or automatically changing the hash to 'bcrypt' on first successful login.
- Keep track of the old userid, to correctly migrate submissions, or, maintain the same user IDs (I guess that migration would be preferred to allow the new user management to keep its own sequence for user IDs).
- Link all of the submissions to the correct new IDs.
- Same thing with contests, but this needs more study, as the contest system will be different. | priority | implement migration mechanism from joomla users the current online judge uses joomla we have to implement a migration mechanism for those users to the new systems this should include implement a joomla password type so the current joomla hashes can be used probably forcing the user to update the password on first login so it can be migrated to bcrypt or automatically changing the hash to bcrypt on first successful login keep track of the old userid to correctly migrate submissions or maintain the same user ids i guess that migration would be preferred to allow the new user management to keep its own sequence for user ids link all of the submissions to the correct new ids same thing with contests but this needs more study as the contest system will be different | 1 |
667,493 | 22,475,431,436 | IssuesEvent | 2022-06-22 11:53:25 | ballerina-platform/ballerina-dev-website | https://api.github.com/repos/ballerina-platform/ballerina-dev-website | closed | Fix the Formatting of the Release Note Template | Priority/Highest Type/Task Points/0.5 Area/CommonPages | ## Description
Need to fix the formatting of the release note templates below as per the latest updates for both Swan Lake and 1.2.x:
https://github.com/ballerina-platform/ballerina-release/blob/master/release-notes/release-note-template.md
## Related website/documentation area
> Add/Uncomment the relevant area label out of the following.
<!--Area/BBEs-->
<!--Area/HomePageSamples-->
<!--Area/LearnPages-->
<!--Area/CommonPages-->
<!--Area/Backend-->
<!--Area/UIUX-->
<!--Area/Workflows-->
<!--Area/Blog-->
## Describe your task(s)
> A detailed description of the task.
## Related issue(s) (optional)
> Any related issues such as sub tasks and issues reported in other repositories (e.g., component repositories), similar problems, etc.
## Suggested label(s) (optional)
> Optional comma-separated list of suggested labels. Non committers can’t assign labels to issues, and thereby, this will help issue creators who are not a committer to suggest possible labels.
## Suggested assignee(s) (optional)
> Optional comma-separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, and thereby, this will help issue creators who are not a committer to suggest possible assignees.
| 1.0 | Fix the Formatting of the Release Note Template - ## Description
Need to fix the formatting of the release note templates below as per the latest updates for both Swan Lake and 1.2.x:
https://github.com/ballerina-platform/ballerina-release/blob/master/release-notes/release-note-template.md
## Related website/documentation area
> Add/Uncomment the relevant area label out of the following.
<!--Area/BBEs-->
<!--Area/HomePageSamples-->
<!--Area/LearnPages-->
<!--Area/CommonPages-->
<!--Area/Backend-->
<!--Area/UIUX-->
<!--Area/Workflows-->
<!--Area/Blog-->
## Describe your task(s)
> A detailed description of the task.
## Related issue(s) (optional)
> Any related issues such as sub tasks and issues reported in other repositories (e.g., component repositories), similar problems, etc.
## Suggested label(s) (optional)
> Optional comma-separated list of suggested labels. Non committers can’t assign labels to issues, and thereby, this will help issue creators who are not a committer to suggest possible labels.
## Suggested assignee(s) (optional)
> Optional comma-separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, and thereby, this will help issue creators who are not a committer to suggest possible assignees.
| priority | fix the formatting of the release note template description need to fix the formatting of the release note templates below as per the latest updates for both swan lake and x related website documentation area add uncomment the relevant area label out of the following describe your task s a detailed description of the task related issue s optional any related issues such as sub tasks and issues reported in other repositories e g component repositories similar problems etc suggested label s optional optional comma separated list of suggested labels non committers can’t assign labels to issues and thereby this will help issue creators who are not a committer to suggest possible labels suggested assignee s optional optional comma separated list of suggested team members who should attend the issue non committers can’t assign issues to assignees and thereby this will help issue creators who are not a committer to suggest possible assignees | 1 |
225,268 | 7,480,265,911 | IssuesEvent | 2018-04-04 16:51:05 | ECP-CANDLE/Supervisor | https://api.github.com/repos/ECP-CANDLE/Supervisor | closed | Add test modules to workflows from report | priority=high | Add a test module for each workflow. This feeds into #20. | 1.0 | Add test modules to workflows from report - Add a test module for each workflow. This feeds into #20. | priority | add test modules to workflows from report add a test module for each workflow this feeds into | 1 |
503,820 | 14,598,393,190 | IssuesEvent | 2020-12-21 00:37:00 | Algalish/SmartIssueTracker | https://api.github.com/repos/Algalish/SmartIssueTracker | closed | Can't Rotate Foundation Block Anymore [HOLD mode question] | high-priority question | Ever since the update, the blocks are now fixed in one orientation and cannot be rotated at all. This mod can't be used until that's fixed. | 1.0 | Can't Rotate Foundation Block Anymore [HOLD mode question] - Ever since the update, the blocks are now fixed in one orientation and cannot be rotated at all. This mod can't be used until that's fixed. | priority | can t rotate foundation block anymore ever since the update the blocks are now fixed in one orientation and cannot be rotated at all this mod can t be used until that s fixed | 1 |
822,242 | 30,859,499,054 | IssuesEvent | 2023-08-03 00:53:18 | DiscoTrayStudios/WaterQualityTester | https://api.github.com/repos/DiscoTrayStudios/WaterQualityTester | opened | Streamline image-taking process | type: incomplete category: I/O priority: high size: small category: ui | **Describe the incomplete feature**
When an image is taken which cannot be fully processed by the computer vision library, the user is taken to a different screen where they have the choice to proceed to the results page (and view invalid data) or go back to the camera page to take another picture. This can be slow and tedious.
**Describe the solution you'd like**
If the image cannot be completely processed, stay on the camera page and give the user a pop-up notification describing the error (currently, one of "cannot find color key" or "cannot find test strip"). If the image is completely processed, go straight to the results page.
| 1.0 | Streamline image-taking process - **Describe the incomplete feature**
When an image is taken which cannot be fully processed by the computer vision library, the user is taken to a different screen where they have the choice to proceed to the results page (and view invalid data) or go back to the camera page to take another picture. This can be slow and tedious.
**Describe the solution you'd like**
If the image cannot be completely processed, stay on the camera page and give the user a pop-up notification describing the error (currently, one of "cannot find color key" or "cannot find test strip"). If the image is completely processed, go straight to the results page.
| priority | streamline image taking process describe the incomplete feature when an image is taken which cannot be fully processed by the computer vision library the user is taken to a different screen where they have the choice to proceed to the results page and view invalid data or go back to the camera page to take another picture this can be slow and tedious describe the solution you d like if the image cannot be completely processed stay on the camera page and give the user a pop up notification describing the error currently one of cannot find color key or cannot find test strip if the image is completely processed go straight to the results page | 1 |
490,561 | 14,135,966,660 | IssuesEvent | 2020-11-10 03:03:17 | bitrise-io/bb | https://api.github.com/repos/bitrise-io/bb | closed | Task: Combine V1 and V2 Slack apps | Category: Development Priority: High Status: Complete | ### Description:
The Slack integration overhaul has been pushed to a future release. We are going to combine the V1 and V2 Slack apps and re-deploy it to the new Heroku environment.
#### Subtasks
- [x] Remove credentials from V1 application.
- [x] Upload V1 application to GitHub => #3.
- [x] Adopt command logic in favour of parameter logic => #4.
- [x] Write "coming soon" messages for unimplemented features => #5.
- [x] Deploy to Heroku.
- [x] Request Simon to delete the old deployment.
### Implemented in:
Rust in the Slack app.
| 1.0 | Task: Combine V1 and V2 Slack apps - ### Description:
The Slack integration overhaul has been pushed to a future release. We are going to combine the V1 and V2 Slack apps and re-deploy it to the new Heroku environment.
#### Subtasks
- [x] Remove credentials from V1 application.
- [x] Upload V1 application to GitHub => #3.
- [x] Adopt command logic in favour of parameter logic => #4.
- [x] Write "coming soon" messages for unimplemented features => #5.
- [x] Deploy to Heroku.
- [x] Request Simon to delete the old deployment.
### Implemented in:
Rust in the Slack app.
| priority | task combine and slack apps description the slack integration overhaul has been pushed to a future release we are going to combine the and slack apps and re deploy it to the new heroku environment subtasks remove credentials from application upload application to github adopt command logic in favour of parameter logic write coming soon messages for unimplemented features deploy to heroku request simon to delete the old deployment implemented in rust in the slack app | 1 |
339,225 | 10,244,663,154 | IssuesEvent | 2019-08-20 10:58:59 | codetapacademy/codetap.academy | https://api.github.com/repos/codetapacademy/codetap.academy | opened | feat: add publish date to lecture | Priority: High Status: Available Type: Enhancement |
this is part of feat: enhance lecture with publish date and member level #173 | 1.0 | feat: add publish date to lecture -
this is part of feat: enhance lecture with publish date and member level #173 | priority | feat add publish date to lecture this is part of feat enhance lecture with publish date and member level | 1 |
239,690 | 7,799,928,042 | IssuesEvent | 2018-06-09 02:14:13 | tine20/Tine-2.0-Open-Source-Groupware-and-CRM | https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM | closed | 0006206:
relation type field can be empty | Bug Crm Mantis high priority | **Reported by pschuele on 4 Apr 2012 12:12**
**Version:** Maischa (2011-05-9)
relation type field can be empty
**Steps to reproduce:** - add contact to lead
- click into relation type col in contact grid
- remove value
- click 'Ok' without leaving the field
-> contact relation is added without relation type
**Additional information:** Relation type not supported.
.../Tinebase/Record/Abstract.php(937): Crm_Model_Lead->_setFromJson()
.../Tinebase/Record/Abstract.php(327): Tinebase_Record_Abstract->setFromJson()
.../Tinebase/Frontend/Json/Abstract.php(180): Tinebase_Record_Abstract->setFromJsonInUsersTimezone()
.../Crm/Frontend/Json.php(89): Tinebase_Frontend_Json_Abstract->_save()
[internal function]: Crm_Frontend_Json->saveLead()
.../library/Zend/Server/Abstract.php(232): call_user_func_array()
.../Zend/Json/Server.php(558): Zend_Server_Abstract->_dispatch()
.../Zend/Json/Server.php(197): Zend_Json_Server->_handle()
.../Tinebase/Server/Json.php(140): Zend_Json_Server->handle()
.../Tinebase/Server/Json.php(76): Tinebase_Server_Json->_handle()
.../Tinebase/Core.php(235): Tinebase_Server_Json->handle()
.../index.php(57): Tinebase_Core::dispatchRequest()
| 1.0 | 0006206:
relation type field can be empty - **Reported by pschuele on 4 Apr 2012 12:12**
**Version:** Maischa (2011-05-9)
relation type field can be empty
**Steps to reproduce:** - add contact to lead
- click into relation type col in contact grid
- remove value
- click 'Ok' without leaving the field
-> contact relation is added without relation type
**Additional information:** Relation type not supported.
.../Tinebase/Record/Abstract.php(937): Crm_Model_Lead->_setFromJson()
.../Tinebase/Record/Abstract.php(327): Tinebase_Record_Abstract->setFromJson()
.../Tinebase/Frontend/Json/Abstract.php(180): Tinebase_Record_Abstract->setFromJsonInUsersTimezone()
.../Crm/Frontend/Json.php(89): Tinebase_Frontend_Json_Abstract->_save()
[internal function]: Crm_Frontend_Json->saveLead()
.../library/Zend/Server/Abstract.php(232): call_user_func_array()
.../Zend/Json/Server.php(558): Zend_Server_Abstract->_dispatch()
.../Zend/Json/Server.php(197): Zend_Json_Server->_handle()
.../Tinebase/Server/Json.php(140): Zend_Json_Server->handle()
.../Tinebase/Server/Json.php(76): Tinebase_Server_Json->_handle()
.../Tinebase/Core.php(235): Tinebase_Server_Json->handle()
.../index.php(57): Tinebase_Core::dispatchRequest()
| priority | relation type field can be empty reported by pschuele on apr version maischa relation type field can be empty steps to reproduce add contact to lead click into relation type col in contact grid remove value click ok without leaving the field gt contact relation is added without relation type additional information relation type not supported tinebase record abstract php crm model lead gt setfromjson tinebase record abstract php tinebase record abstract gt setfromjson tinebase frontend json abstract php tinebase record abstract gt setfromjsoninuserstimezone crm frontend json php tinebase frontend json abstract gt save crm frontend json gt savelead library zend server abstract php call user func array zend json server php zend server abstract gt dispatch zend json server php zend json server gt handle tinebase server json php zend json server gt handle tinebase server json php tinebase server json gt handle tinebase core php tinebase server json gt handle index php tinebase core dispatchrequest | 1 |
424,629 | 12,321,164,333 | IssuesEvent | 2020-05-13 08:16:34 | incognitochain/incognito-chain | https://api.github.com/repos/incognitochain/incognito-chain | opened | [Portal][Local] List issues | Priority: High Type: Bug | - [ ] Liquidation by exchange rate wrong
- [ ] Unlock collateral incorrect when redeem success
| 1.0 | [Portal][Local] List issues - - [ ] Liquidation by exchange rate wrong
- [ ] Unlock collateral incorrect when redeem success
| priority | list issues liquidation by exchange rate wrong unlock collateral incorrect when redeem success | 1 |
735,094 | 25,379,046,409 | IssuesEvent | 2022-11-21 16:07:26 | zowe/zowe-cli | https://api.github.com/repos/zowe/zowe-cli | closed | Zowe CLI Exit Code 0 for Errors | bug good first issue priority-high | For example (on windows): `zowe jobs -g && echo %ERRORLEVEL%` results in an exit code `0` although the command itself reports `Command Error:`.
This exit code should probably be a non-zero value. | 1.0 | Zowe CLI Exit Code 0 for Errors - For example (on windows): `zowe jobs -g && echo %ERRORLEVEL%` results in an exit code `0` although the command itself reports `Command Error:`.
This exit code should probably be a non-zero value. | priority | zowe cli exit code for errors for example on windows zowe jobs g echo errorlevel results in an exit code although the command itself reports command error this exit code should probably be a non zero value | 1 |
636,438 | 20,600,433,577 | IssuesEvent | 2022-03-06 06:41:44 | ut-issl/tlm-cmd-db | https://api.github.com/repos/ut-issl/tlm-cmd-db | opened | Keeping the enums in C2A consistent with the status conversions in the tlm DB is far too tedious | help wanted priority::high tools WINGS | ## Overview
Keeping the enums in C2A consistent with the status conversions in the tlm DB is far too tedious
## Details
- Status conversions like https://github.com/ut-issl/tlm-cmd-db/blob/d7e6a2115ef33a595c749fbd87781328a40f4dda/TLM_DB/SAMPLE_TLM_DB_HK.csv#L25 are a pain to keep consistent with the enums in the source code by hand every time.
- Updates also simply get missed.
## Close condition
Once this is somehow resolved
| 1.0 | Keeping the enums in C2A consistent with the status conversions in the tlm DB is far too tedious - ## Overview
Keeping the enums in C2A consistent with the status conversions in the tlm DB is far too tedious
## Details
- Status conversions like https://github.com/ut-issl/tlm-cmd-db/blob/d7e6a2115ef33a595c749fbd87781328a40f4dda/TLM_DB/SAMPLE_TLM_DB_HK.csv#L25 are a pain to keep consistent with the enums in the source code by hand every time.
- Updates also simply get missed.
## Close condition
Once this is somehow resolved
| priority | keeping the enums consistent with the status conversions in the tlm db is far too tedious overview keeping the enums consistent with the status conversions in the tlm db is far too tedious details status conversions like are a pain to keep consistent with the enums in the source code by hand every time updates also simply get missed close condition once this is somehow resolved | 1 |
492,664 | 14,217,253,946 | IssuesEvent | 2020-11-17 10:06:05 | bounswe/bounswe2020group4 | https://api.github.com/repos/bounswe/bounswe2020group4 | closed | (BKND) Finalize the Project Plan | Backend Effort: Medium Priority: High Status: Completed Task: Assignment | Backend team will update the Project Plan to shape it into its final form.
Deadline: 19/11/20 | 1.0 | (BKND) Finalize the Project Plan - Backend team will update the Project Plan to shape it into its final form.
Deadline: 19/11/20 | priority | bknd finalize the project plan backend team will update the project plan to shape it into its final form deadline | 1 |
179,826 | 6,628,922,841 | IssuesEvent | 2017-09-24 01:08:20 | KeplerGO/PyKE | https://api.github.com/repos/KeplerGO/PyKE | closed | Release PyKE v3.0.0 | high priority | Thanks to @mirca's incredibly hard work, the master branch of PyKE is now compatible with Python 3 and no longer depends on PyRAF (cf #12), which were the key goals for "PyKe v3.0" aka "PyKE3". This means we can now start planning the first official release of PyKE3!
Let's start by releasing a "3.0.beta" and announce it to a small audience on social media, followed by an official "3.0.0" a few weeks later announced on the website.
Tasks to complete ahead of the release are likely to include the following:
- [x] Review the documentation and its hosting
- [x] Edit the keplerscience website to reflect PyKE 3.0
- [x] Ensure installation and citation instructions in the README are up to date
- [x] Add the `kepdraw` tool [#18]
- [x] Add the `kepsff` tool [#19]
- [x] Add a simple tutorial to the sphinx docs that explains how to get a lightcurve from K2 pixels using `kepmask` + `kepextract` + `kepflatten` + `kepsff` + `kepdraw` (cf. https://keplerscience.arc.nasa.gov/PyKEprimerWalkthroughE.shtml)
- [x] Decide on a name for PyKE to use in PyPI (`pyke`, `pykep`, `pykepler` are all taken by others). Let's use `pyketools`? | 1.0 | Release PyKE v3.0.0 - Thanks to @mirca's incredibly hard work, the master branch of PyKE is now compatible with Python 3 and no longer depends on PyRAF (cf #12), which were the key goals for "PyKe v3.0" aka "PyKE3". This means we can now start planning the first official release of PyKE3!
Let's start by releasing a "3.0.beta" and announce it to a small audience on social media, followed by an official "3.0.0" a few weeks later announced on the website.
Tasks to complete ahead of the release are likely to include the following:
- [x] Review the documentation and its hosting
- [x] Edit the keplerscience website to reflect PyKE 3.0
- [x] Ensure installation and citation instructions in the README are up to date
- [x] Add the `kepdraw` tool [#18]
- [x] Add the `kepsff` tool [#19]
- [x] Add a simple tutorial to the sphinx docs that explains how to get a lightcurve from K2 pixels using `kepmask` + `kepextract` + `kepflatten` + `kepsff` + `kepdraw` (cf. https://keplerscience.arc.nasa.gov/PyKEprimerWalkthroughE.shtml)
- [x] Decide on a name for PyKE to use in PyPI (`pyke`, `pykep`, `pykepler` are all taken by others). Let's use `pyketools`? | priority | release pyke thanks to mirca s incredibly hard work the master branch of pyke is now compatible with python and no longer depends on pyraf cf which were the key goals for pyke aka this means we can now start planning the first official release of let s start by releasing a beta and announce it to a small audience on social media followed by an official a few weeks later announced on the website tasks to complete ahead of the release are likely to include the following review the documentation and its hosting edit the keplerscience website to reflect pyke ensure installation and citation instructions in the readme are up to date add the kepdraw tool add the kepsff tool add a simple tutorial to the sphinx docs that explains how to get a lightcurve from pixels using kepmask kepextract kepflatten kepsff kepdraw cf decide on a name for pyke to use in pypi pyke pykep pykepler are all taken by others let s use pyketools | 1 |
178,576 | 6,612,160,922 | IssuesEvent | 2017-09-20 01:55:23 | portworx/torpedo | https://api.github.com/repos/portworx/torpedo | closed | Generate application specs from yaml files | framework priority/high | Currently the applications are programiatically defined in golang. This takes a learning curve for someone to add a new application.
End users are more familiar with yaml config files for describing k8s primitives. So we need a stub module that takes yaml files and auto-genrates golang objects which subsequently get used by the torpedo framework. | 1.0 | Generate application specs from yaml files - Currently the applications are programiatically defined in golang. This takes a learning curve for someone to add a new application.
End users are more familiar with yaml config files for describing k8s primitives. So we need a stub module that takes yaml files and auto-genrates golang objects which subsequently get used by the torpedo framework. | priority | generate application specs from yaml files currently the applications are programiatically defined in golang this takes a learning curve for someone to add a new application end users are more familiar with yaml config files for describing primitives so we need a stub module that takes yaml files and auto genrates golang objects which subsequently get used by the torpedo framework | 1 |
394,573 | 11,645,444,196 | IssuesEvent | 2020-03-01 01:28:44 | Thorium-Sim/thorium | https://api.github.com/repos/Thorium-Sim/thorium | closed | Messages freezing core | priority/high type/bug | ### Requested By: Jordan
### Priority: High
### Version: 2.6.0
If the messaging core is open and a new message is received (tab changes color and the notification sound plays) core will freeze. Clicking won't do anything, key commands won't do anything. Closing the tab and opening a new one will resume functionality.
### Steps to Reproduce
Have the messaging core open but no messages selected.
Have a message sent from somewhere else.
Click in futility | 1.0 | Messages freezing core - ### Requested By: Jordan
### Priority: High
### Version: 2.6.0
If the messaging core is open and a new message is received (tab changes color and the notification sound plays) core will freeze. Clicking won't do anything, key commands won't do anything. Closing the tab and opening a new one will resume functionality.
### Steps to Reproduce
Have the messaging core open but no messages selected.
Have a message sent from somewhere else.
Click in futility | priority | messages freezing core requested by jordan priority high version if the messaging core is open and a new message is received tab changes color and the notification sound plays core will freeze clicking won t do anything key commands won t do anything closing the tab and opening a new one will resume functionality steps to reproduce have the messaging core open but no messages selected have a message sent from somewhere else click in futility | 1 |
610,053 | 18,892,927,737 | IssuesEvent | 2021-11-15 15:02:06 | 50ra4/clock-in-app | https://api.github.com/repos/50ra4/clock-in-app | closed | Refactoring the service | priority high | ref: #61
- [x] Extract functions that can be shared and reference them (e.g. `queryToxxx`) #132
- [x] Make the dairyTimeRecord array also carry an order #132
- [x] Stop firestore/firebase types and functions from being used outside the service #136
- [x] Compare the data before and after an update to cut down the number of updates #137 | 1.0 | Refactoring the service - ref: #61
- [x] Extract functions that can be shared and reference them (e.g. `queryToxxx`) #132
- [x] Make the dairyTimeRecord array also carry an order #132
- [x] Stop firestore/firebase types and functions from being used outside the service #136
- [x] Compare the data before and after an update to cut down the number of updates #137 | priority | refactoring the service ref extract functions that can be shared and reference them querytoxxx make the dairytimerecord array also carry an order stop firestore firebase types and functions from being used outside the service compare the data before and after an update to cut down the number of updates | 1 |
679,653 | 23,241,025,352 | IssuesEvent | 2022-08-03 15:35:27 | bigbio/quantms | https://api.github.com/repos/bigbio/quantms | closed | DIANN changes in the current pipeline | enhancement high-priority dia analysis | ### Description of feature
Dear @daichengxin @vdemichev @jpfeuffer:
I have started testing the DIANN pipeline in quantms. The main problem I found is that still the pipeline is **really slow**. However, after talking to @vdemichev some other issues should be solved. First two ideas about what do we want to solve between DIANN and quantms:
quantms + diann should be able to:
- reanalyze DIA-based data in PRIDE Archive annotated in SDRFs and Uniprot FASTA files.
- export the results into standard file formats including mzTab + mzML (still pending issue here #119 that we will continue discussing)
- use the capabilities of quantms to run DIANN distributed and cloud as much as possible.
The current pipeline/parallelization does the following :
1. Generate the "generic config files" for the analysis including the PTMs, Enzyme rules, etc. For that, we use the following command in one node:
```
prepare_diann_parameters.py generate \
--enzyme "Trypsin" \
--fix_mod "" \
--var_mod "Oxidation (M),Carbamidomethyl (C)" \
--precursor_tolerence 10 \
--precursor_tolerence_unit ppm \
--fragment_tolerence 0.05 \
--fragment_tolerence_unit Da \
> GENERATE_DIANN_CFG.log
```
This actually generates a file (**diann_config.cfg**) like containing the following information:
```
--dir ./mzMLs --cut K*,R*,!*P --var-mod Oxidation,15.994915,M --var-mod Carbamidomethyl,57.021464,C --mass-acc 10 --mass-acc-ms1 20 --matrices --report-lib-info
```
2. Each raw file in the SDRF is converted to mzML; or the mzML that was converted outside the pipeline (ABSiex case) is used for the analysis.
3. For each mzML, we generate in parallel the corresponding **theoretical spectral library** using the following command:
```
diann `cat library_config.cfg` \
--fasta Homo-sapiens-uniprot-reviewed-isoforms-contaminants-decoy-202105.fasta \
--fasta-search \
--f 20181116_QEHFX3_BP_RSLCcap_hHeart_DIA_58.mzML \
--out-lib 20181116_QEHFX3_BP_RSLCcap_hHeart_DIA_58_lib.tsv \
--min-pr-mz 350 \
--max-pr-mz 1650 \
\
\
--missed-cleavages 2 \
--min-pep-len 6 \
--max-pep-len 40 \
--min-pr-charge 2 \
--max-pr-charge 4 \
--var-mods 3 \
--threads 25 \
--predictor \
--verbose 3 \
> diann.log
```
**Note**: This step is extremely slow. In addition, @vdemichev as commented me that generating a spectral library by raw file will give you wrong results in the next step (step 4) when searching all the data together and perform the quantification (**q-value** wrong).
4. The quantification step the pipeline uses all the mzMLs and libraries generated in the individual steps to perform the quantification and statistical assessment:
```
diann `cat diann_config.cfg` \
--lib N294-1_lib.tsv--lib N294-2_lib.tsv--lib N295-1_lib.tsv--lib N295-2_lib.tsv--lib N296-1_lib.tsv--lib N299-1_lib.tsv--lib N296-2_lib.tsv--lib N299-2_lib.tsv \
--relaxed-prot-inf \
--fasta Homo-sapiens-uniprot-reviewed-isoforms-contaminants-decoy-202105.fasta \
\
\
\
\
--threads 25 \
--missed-cleavages 2 \
--min-pep-len 6 \
--max-pep-len 40 \
--min-pr-charge 2 \
--max-pr-charge 4 \
--var-mods 3 \
--matrix-spec-q 0.01 \
\
--reannotate \
\
--out diann_report.tsv \
--verbose 3 \
> diann.log
```
Discussing with @vdemichev about performance, he mentioned that this approach will give you wrong statistical results.
@vdemichev
- can you suggest which changes can be done to parallelize DIANN?
- can you suggest which parameters can improve the performance of all the steps?
Lets discuss the ideas in this thread in details.
| 1.0 | DIANN changes in the current pipeline - ### Description of feature
Dear @daichengxin @vdemichev @jpfeuffer:
I have started testing the DIANN pipeline in quantms. The main problem I found is that still the pipeline is **really slow**. However, after talking to @vdemichev some other issues should be solved. First two ideas about what do we want to solve between DIANN and quantms:
quantms + diann should be able to:
- reanalyze DIA-based data in PRIDE Archive annotated in SDRFs and Uniprot FASTA files.
- export the results into standard file formats including mzTab + mzML (still pending issue here #119 that we will continue discussing)
- use the capabilities of quantms to run DIANN distributed and cloud as much as possible.
The current pipeline/parallelization does the following :
1. Generate the "generic config files" for the analysis including the PTMs, Enzyme rules, etc. For that, we use the following command in one node:
```
prepare_diann_parameters.py generate \
--enzyme "Trypsin" \
--fix_mod "" \
--var_mod "Oxidation (M),Carbamidomethyl (C)" \
--precursor_tolerence 10 \
--precursor_tolerence_unit ppm \
--fragment_tolerence 0.05 \
--fragment_tolerence_unit Da \
> GENERATE_DIANN_CFG.log
```
This actually generates a file (**diann_config.cfg**) like containing the following information:
```
--dir ./mzMLs --cut K*,R*,!*P --var-mod Oxidation,15.994915,M --var-mod Carbamidomethyl,57.021464,C --mass-acc 10 --mass-acc-ms1 20 --matrices --report-lib-info
```
2. Each raw file in the SDRF is converted to mzML; or the mzML that was converted outside the pipeline (ABSiex case) is used for the analysis.
3. For each mzML, we generate in parallel the corresponding **theoretical spectral library** using the following command:
```
diann `cat library_config.cfg` \
--fasta Homo-sapiens-uniprot-reviewed-isoforms-contaminants-decoy-202105.fasta \
--fasta-search \
--f 20181116_QEHFX3_BP_RSLCcap_hHeart_DIA_58.mzML \
--out-lib 20181116_QEHFX3_BP_RSLCcap_hHeart_DIA_58_lib.tsv \
--min-pr-mz 350 \
--max-pr-mz 1650 \
\
\
--missed-cleavages 2 \
--min-pep-len 6 \
--max-pep-len 40 \
--min-pr-charge 2 \
--max-pr-charge 4 \
--var-mods 3 \
--threads 25 \
--predictor \
--verbose 3 \
> diann.log
```
**Note**: This step is extremely slow. In addition, @vdemichev as commented me that generating a spectral library by raw file will give you wrong results in the next step (step 4) when searching all the data together and perform the quantification (**q-value** wrong).
4. The quantification step the pipeline uses all the mzMLs and libraries generated in the individual steps to perform the quantification and statistical assessment:
```
diann `cat diann_config.cfg` \
--lib N294-1_lib.tsv--lib N294-2_lib.tsv--lib N295-1_lib.tsv--lib N295-2_lib.tsv--lib N296-1_lib.tsv--lib N299-1_lib.tsv--lib N296-2_lib.tsv--lib N299-2_lib.tsv \
--relaxed-prot-inf \
--fasta Homo-sapiens-uniprot-reviewed-isoforms-contaminants-decoy-202105.fasta \
\
\
\
\
--threads 25 \
--missed-cleavages 2 \
--min-pep-len 6 \
--max-pep-len 40 \
--min-pr-charge 2 \
--max-pr-charge 4 \
--var-mods 3 \
--matrix-spec-q 0.01 \
\
--reannotate \
\
--out diann_report.tsv \
--verbose 3 \
> diann.log
```
Discussing with @vdemichev about performance, he mentioned that this approach will give you wrong statistical results.
@vdemichev
- can you suggest which changes can be done to parallelize DIANN?
- can you suggest which parameters can improve the performance of all the steps?
Lets discuss the ideas in this thread in details.
| priority | diann changes in the current pipeline description of feature dear daichengxin vdemichev jpfeuffer i have started testing the diann pipeline in quantms the main problem i found is that still the pipeline is really slow however after talking to vdemichev some other issues should be solved first two ideas about what do we want to solve between diann and quantms quantms diann should be able to reanalyze dia based data in pride archive annotated in sdrfs and uniprot fasta files export the results into standard file formats including mztab mzml still pending issue here that we will continue discussing use the capabilities of quantms to run diann distributed and cloud as much as possible the current pipeline parallelization does the following generate the generic config files for the analysis including the ptms enzyme rules etc for that we use the following command in one node prepare diann parameters py generate enzyme trypsin fix mod var mod oxidation m carbamidomethyl c precursor tolerence precursor tolerence unit ppm fragment tolerence fragment tolerence unit da generate diann cfg log this actually generates a file diann config cfg like containing the following information dir mzmls cut k r p var mod oxidation m var mod carbamidomethyl c mass acc mass acc matrices report lib info each raw file in the sdrf is converted to mzml or the mzml that was converted outside the pipeline absiex case is used for the analysis for each mzml we generate in parallel the corresponding theoretical spectral library using the following command diann cat library config cfg fasta homo sapiens uniprot reviewed isoforms contaminants decoy fasta fasta search f bp rslccap hheart dia mzml out lib bp rslccap hheart dia lib tsv min pr mz max pr mz missed cleavages min pep len max pep len min pr charge max pr charge var mods threads predictor verbose diann log note this step is extremely slow in addition vdemichev as commented me that generating a spectral library by raw file will give you wrong results in the next step step when searching all the data together and perform the quantification q value wrong the quantification step the pipeline uses all the mzmls and libraries generated in the individual steps to perform the quantification and statistical assessment diann cat diann config cfg lib lib tsv lib lib tsv lib lib tsv lib lib tsv lib lib tsv lib lib tsv lib lib tsv lib lib tsv relaxed prot inf fasta homo sapiens uniprot reviewed isoforms contaminants decoy fasta threads missed cleavages min pep len max pep len min pr charge max pr charge var mods matrix spec q reannotate out diann report tsv verbose diann log discussing with vdemichev about performance he mentioned that this approach will give you wrong statistical results vdemichev can you suggest which changes can be done to parallelize diann can you suggest which parameters can improve the performance of all the steps lets discuss the ideas in this thread in details | 1 |
679,195 | 23,223,952,427 | IssuesEvent | 2022-08-02 21:13:38 | netxs-group/vtm | https://api.github.com/repos/netxs-group/vtm | closed | Run applications as standalone processes | enhancement Terminal high-priority | The goal is to free the memory allocated by the application along with its termination.
Desktopio process can signal with OSC to use IPC instead of TTY (something like extended alternate screen). | 1.0 | Run applications as standalone processes - The goal is to free the memory allocated by the application along with its termination.
Desktopio process can signal with OSC to use IPC instead of TTY (something like extended alternate screen). | priority | run applications as standalone processes the goal is to free the memory allocated by the application along with its termination desktopio process can signal with osc to use ipc instead of tty something like extended alternate screen | 1 |
605,715 | 18,739,415,233 | IssuesEvent | 2021-11-04 11:50:46 | lima-vm/lima | https://api.github.com/repos/lima-vm/lima | closed | `lima busybox nslookup storage.googleapis.com 192.168.5.3` fails with `Can't find storage.googleapis.com: Parse error` | bug priority/high | `lima busybox nslookup storage.googleapis.com 192.168.5.3` fails with `Can't find storage.googleapis.com: Parse error`
```console
$ lima busybox nslookup storage.googleapis.com 192.168.5.3
Server: 192.168.5.3
Address: 192.168.5.3:53
Non-authoritative answer:
*** Can't find storage.googleapis.com: Parse error
Non-authoritative answer:
```
But it works with the real `nslookup`
```console
$ lima nslookup storage.googleapis.com 192.168.5.3
Server: 192.168.5.3
Address: 192.168.5.3#53
Non-authoritative answer:
Name: storage.googleapis.com
Address: 172.217.175.240
Name: storage.googleapis.com
Address: 142.250.196.112
Name: storage.googleapis.com
Address: 216.58.220.144
Name: storage.googleapis.com
Address: 172.217.175.16
Name: storage.googleapis.com
Address: 172.217.175.48
Name: storage.googleapis.com
Address: 172.217.175.80
Name: storage.googleapis.com
Address: 172.217.175.112
Name: storage.googleapis.com
Address: 216.58.197.208
Name: storage.googleapis.com
Address: 216.58.197.240
Name: storage.googleapis.com
Address: 142.250.199.112
Name: storage.googleapis.com
Address: 142.250.207.16
Name: storage.googleapis.com
Address: 142.250.207.48
Name: storage.googleapis.com
Address: 172.217.31.176
Name: storage.googleapis.com
Address: 172.217.161.48
Name: storage.googleapis.com
Address: 172.217.174.112
Name: storage.googleapis.com
Address: 216.58.220.112
```
Also, `busybox nerdctl` with 8.8.8.8 works, too
```console
$ lima busybox nslookup storage.googleapis.com 8.8.8.8
Server: 8.8.8.8
Address: 8.8.8.8:53
Non-authoritative answer:
Name: storage.googleapis.com
Address: 142.250.196.144
Name: storage.googleapis.com
Address: 142.251.42.144
Name: storage.googleapis.com
Address: 142.251.42.176
Name: storage.googleapis.com
Address: 172.217.31.144
Name: storage.googleapis.com
Address: 172.217.161.80
Name: storage.googleapis.com
Address: 216.58.220.144
Name: storage.googleapis.com
Address: 172.217.175.16
Name: storage.googleapis.com
Address: 172.217.175.48
Name: storage.googleapis.com
Address: 172.217.175.80
Name: storage.googleapis.com
Address: 172.217.175.112
Name: storage.googleapis.com
Address: 216.58.197.208
Name: storage.googleapis.com
Address: 216.58.197.240
Name: storage.googleapis.com
Address: 172.217.25.80
Name: storage.googleapis.com
Address: 172.217.25.112
Name: storage.googleapis.com
Address: 142.250.199.112
Name: storage.googleapis.com
Address: 142.250.207.16
Non-authoritative answer:
Name: storage.googleapis.com
Address: 2404:6800:4004:822::2010
Name: storage.googleapis.com
Address: 2404:6800:4004:825::2010
Name: storage.googleapis.com
Address: 2404:6800:4004:826::2010
Name: storage.googleapis.com
Address: 2404:6800:4004:808::2010
```
---
Lima 0f9483b3ba568c86823cada0d6a9681d3e0d1f66 with the default Ubuntu 21.10 template
# Workaround
Set `useHostResolver: false` in the YAML
| 1.0 | `lima busybox nslookup storage.googleapis.com 192.168.5.3` fails with `Can't find storage.googleapis.com: Parse error` - `lima busybox nslookup storage.googleapis.com 192.168.5.3` fails with `Can't find storage.googleapis.com: Parse error`
```console
$ lima busybox nslookup storage.googleapis.com 192.168.5.3
Server: 192.168.5.3
Address: 192.168.5.3:53
Non-authoritative answer:
*** Can't find storage.googleapis.com: Parse error
Non-authoritative answer:
```
But it works with the real `nslookup`
```console
$ lima nslookup storage.googleapis.com 192.168.5.3
Server: 192.168.5.3
Address: 192.168.5.3#53
Non-authoritative answer:
Name: storage.googleapis.com
Address: 172.217.175.240
Name: storage.googleapis.com
Address: 142.250.196.112
Name: storage.googleapis.com
Address: 216.58.220.144
Name: storage.googleapis.com
Address: 172.217.175.16
Name: storage.googleapis.com
Address: 172.217.175.48
Name: storage.googleapis.com
Address: 172.217.175.80
Name: storage.googleapis.com
Address: 172.217.175.112
Name: storage.googleapis.com
Address: 216.58.197.208
Name: storage.googleapis.com
Address: 216.58.197.240
Name: storage.googleapis.com
Address: 142.250.199.112
Name: storage.googleapis.com
Address: 142.250.207.16
Name: storage.googleapis.com
Address: 142.250.207.48
Name: storage.googleapis.com
Address: 172.217.31.176
Name: storage.googleapis.com
Address: 172.217.161.48
Name: storage.googleapis.com
Address: 172.217.174.112
Name: storage.googleapis.com
Address: 216.58.220.112
```
Also, `busybox nerdctl` with 8.8.8.8 works, too
```console
$ lima busybox nslookup storage.googleapis.com 8.8.8.8
Server: 8.8.8.8
Address: 8.8.8.8:53
Non-authoritative answer:
Name: storage.googleapis.com
Address: 142.250.196.144
Name: storage.googleapis.com
Address: 142.251.42.144
Name: storage.googleapis.com
Address: 142.251.42.176
Name: storage.googleapis.com
Address: 172.217.31.144
Name: storage.googleapis.com
Address: 172.217.161.80
Name: storage.googleapis.com
Address: 216.58.220.144
Name: storage.googleapis.com
Address: 172.217.175.16
Name: storage.googleapis.com
Address: 172.217.175.48
Name: storage.googleapis.com
Address: 172.217.175.80
Name: storage.googleapis.com
Address: 172.217.175.112
Name: storage.googleapis.com
Address: 216.58.197.208
Name: storage.googleapis.com
Address: 216.58.197.240
Name: storage.googleapis.com
Address: 172.217.25.80
Name: storage.googleapis.com
Address: 172.217.25.112
Name: storage.googleapis.com
Address: 142.250.199.112
Name: storage.googleapis.com
Address: 142.250.207.16
Non-authoritative answer:
Name: storage.googleapis.com
Address: 2404:6800:4004:822::2010
Name: storage.googleapis.com
Address: 2404:6800:4004:825::2010
Name: storage.googleapis.com
Address: 2404:6800:4004:826::2010
Name: storage.googleapis.com
Address: 2404:6800:4004:808::2010
```
---
Lima 0f9483b3ba568c86823cada0d6a9681d3e0d1f66 with the default Ubuntu 21.10 template
# Workaround
Set `useHostResolver: false` in the YAML
| priority | lima busybox nslookup storage googleapis com fails with can t find storage googleapis com parse error lima busybox nslookup storage googleapis com fails with can t find storage googleapis com parse error console lima busybox nslookup storage googleapis com server address non authoritative answer can t find storage googleapis com parse error non authoritative answer but it works with the real nslookup console lima nslookup storage googleapis com server address non authoritative answer name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address also busybox nerdctl with works too console lima busybox nslookup storage googleapis com server address non authoritative answer name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address non authoritative answer name storage googleapis com address name storage googleapis com address name storage googleapis com address name storage googleapis com address lima with the default ubuntu template workaround set usehostresolver false in the yaml | 1 |
140,037 | 5,396,234,381 | IssuesEvent | 2017-02-27 11:03:05 | Cxbx-Reloaded/Cxbx-Reloaded | https://api.github.com/repos/Cxbx-Reloaded/Cxbx-Reloaded | opened | Try to reserve memory ranges via PE sections | enhancement help wanted high-priority kernel needs-developer-discussion | Research if there's a method by which memory ranges can be reserved, by linking in sections at absolute addresses. This would allow us to reserve an address range for the kernel, contiguous memory, etc.
Surely, VirtualAllocEx can be used, but that's unreliable. I wonder if we could (ab)use the PE loader for this.
(unreliable in a sense that the preferred address might not be claimable)
One lead is here :
http://stackoverflow.com/questions/33400783/how-can-i-declare-a-variable-at-an-absolute-address-with-gcc | 1.0 | Try to reserve memory ranges via PE sections - Research if there's a method by which memory ranges can be reserved, by linking in sections at absolute addresses. This would allow us to reserve an address range for the kernel, contiguous memory, etc.
Surely, VirtualAllocEx can be used, but that's unreliable. I wonder if we could (ab)use the PE loader for this.
(unreliable in a sense that the preferred address might not be claimable)
One lead is here :
http://stackoverflow.com/questions/33400783/how-can-i-declare-a-variable-at-an-absolute-address-with-gcc | priority | try to reserve memory ranges via pe sections research if there s a method by which memory ranges can be reserved by linking in sections at absolute addresses this would allow us to reserve an address range for the kernel contiguous memory etc surely virtualallocex can be used but that s unreliable i wonder if we could ab use the pe loader for this unreliable in a sense that the preferred address might not be claimable one lead is here | 1 |
226,467 | 7,519,581,639 | IssuesEvent | 2018-04-12 12:06:41 | boissierflorian/projet_webservices | https://api.github.com/repos/boissierflorian/projet_webservices | closed | Database access | app/config core/model discuss high priority in progress todo | - Loading the database connection.
- Creating a configuration file? | 1.0 | Database access - - Loading the database connection.
- Creating a configuration file? | priority | database access loading the database connection creating a configuration file | 1 |
121,540 | 4,817,811,586 | IssuesEvent | 2016-11-04 14:46:05 | aaronang/cong-the-ripper | https://api.github.com/repos/aaronang/cong-the-ripper | opened | Job.runningTasks is not updated | Priority: High | This part, and probably others too, do not call `decreaseRunningTasks`, actually, this function is not used anywhere in our code...
```Go
case addr := <-m.heartbeatMissChan:
// moved the scheduled tasks back to new tasks to be re-scheduled
for i := range m.instances[addr].tasks {
task := m.instances[addr].tasks[i]
m.scheduledTasks = removeTaskFrom(m.scheduledTasks, task.JobID, task.ID)
m.newTasks = append([]*lib.Task{task}, m.newTasks...)
}
delete(m.instances, addr)
``` | 1.0 | Job.runningTasks is not updated - This part, and probably others too, do not call `decreaseRunningTasks`, actually, this function is not used anywhere in our code...
```Go
case addr := <-m.heartbeatMissChan:
// moved the scheduled tasks back to new tasks to be re-scheduled
for i := range m.instances[addr].tasks {
task := m.instances[addr].tasks[i]
m.scheduledTasks = removeTaskFrom(m.scheduledTasks, task.JobID, task.ID)
m.newTasks = append([]*lib.Task{task}, m.newTasks...)
}
delete(m.instances, addr)
``` | priority | job runningtasks is not updated this part and probably others too do not call decreaserunningtasks actually this function is not used anywhere in our code go case addr m heartbeatmisschan moved the scheduled tasks back to new tasks to be re scheduled for i range m instances tasks task m instances tasks m scheduledtasks removetaskfrom m scheduledtasks task jobid task id m newtasks append lib task task m newtasks delete m instances addr | 1 |
731,502 | 25,219,396,223 | IssuesEvent | 2022-11-14 11:36:19 | talent-connect/connect | https://api.github.com/repos/talent-connect/connect | closed | [CON:] Update contact information on page and emails | Area/frontend [react] Area/email [mjml] Task Ready Priority: High | ## Context/background
When a new mentor or mentee signs up for ReDI Connect, they receive information about contact people (both on the website and emails). With recent changes in the team, it is necessary to update such info so the mentors and mentees are referred to the correct team members.
## What needs to be done?
1) PAGE
Change contact information on the page (confirmation upon signing up): replace current info (screenshot) with the following:
ReDI Career Support Team
General - [career@redi-school.org](mailto:career@redi-school.org)
Mentorship program - [hadeer@redi-school.org](mailto:hadeer@redi-school.org)

2) EMAILS
Please do the same for the emails. There are several confirmation and follow-up emails sent to users - "verify your email address", "checking in: we would love to hear about your mentorship", "XX has accepted your application". All of them need to be updated - the signature must contain Hadeer's contact information.
ReDI Career Support Team
General - [career@redi-school.org](mailto:career@redi-school.org)
Mentorship program - [hadeer@redi-school.org](mailto:hadeer@redi-school.org)
<img width="644" alt="Screenshot 2022-11-08 at 16 48 48" src="https://user-images.githubusercontent.com/114068119/200615790-cf968850-2a21-49ed-97fc-00776430cb6a.png">
<img width="644" alt="Screenshot 2022-11-08 at 16 48 58" src="https://user-images.githubusercontent.com/114068119/200615834-3d0f056a-3688-4202-851b-6c71784f5af8.png">
| 1.0 | [CON:] Update contact information on page and emails - ## Context/background
When a new mentor or mentee signs up for ReDI Connect, they receive information about contact people (both on the website and emails). With recent changes in the team, it is necessary to update such info so the mentors and mentees are referred to the correct team members.
## What needs to be done?
1) PAGE
Change contact information on the page (confirmation upon signing up): replace current info (screenshot) with the following:
ReDI Career Support Team
General - [career@redi-school.org](mailto:career@redi-school.org)
Mentorship program - [hadeer@redi-school.org](mailto:hadeer@redi-school.org)

2) EMAILS
Please do the same for the emails. There are several confirmation and follow-up emails sent to users - "verify your email address", "checking in: we would love to hear about your mentorship", "XX has accepted your application". All of them need to be updated - the signature must contain Hadeer's contact information.
ReDI Career Support Team
General - [career@redi-school.org](mailto:career@redi-school.org)
Mentorship program - [hadeer@redi-school.org](mailto:hadeer@redi-school.org)
<img width="644" alt="Screenshot 2022-11-08 at 16 48 48" src="https://user-images.githubusercontent.com/114068119/200615790-cf968850-2a21-49ed-97fc-00776430cb6a.png">
<img width="644" alt="Screenshot 2022-11-08 at 16 48 58" src="https://user-images.githubusercontent.com/114068119/200615834-3d0f056a-3688-4202-851b-6c71784f5af8.png">
| priority | update contact information on page and emails context background when a new mentor or mentee signs up for redi connect they receive information about contact people both on the website and emails with recent changes in the team it is necessary to update such info so the mentors and mentees are referred to the correct team members what needs to be done page change contact information on the page confirmation upon signing up replace current info screenshot with the following redi career support team general mailto career redi school org mentorship program mailto hadeer redi school org emails please to the same for the emails there are several confirmation and follow up emails sent to users verify your email address checking in we would love to hear about your mentorship xx has accepted your application all of them need to be updated the signature must contain hadeer s contact information redi career support team general mailto career redi school org mentorship program mailto hadeer redi school org img width alt screenshot at src img width alt screenshot at src | 1 |
701,145 | 24,088,141,365 | IssuesEvent | 2022-09-19 12:46:53 | GlodoUK/helm-charts | https://api.github.com/repos/GlodoUK/helm-charts | closed | Change pullPolicy default | good first issue priority: high breaking change | `pullPolicy` has historically been set to `Always`.
This is somewhat wasteful given how we tag images internally (usually `$ODOO_VERSION-$timestamp`), when we forget to change the policy to `IfNotPresent`.
If third parties have any concerns, please raise this with us over the next 6 weeks.
| 1.0 | Change pullPolicy default - `pullPolicy` has historically been set to `Always`.
This is somewhat wasteful given how we tag images internally (usually `$ODOO_VERSION-$timestamp`), when we forget to change the policy to `IfNotPresent`.
If third parties have any concerns, please raise this with us over the next 6 weeks.
| priority | change pullpolicy default pullpolicy has historically been to always this is somewhat wasteful given how we tag images internally usually odoo version timestamp when we forget to change the policy to ifnotpresent if third parties have any concerns please raise this with us over the next weeks | 1 |
64,423 | 3,211,551,812 | IssuesEvent | 2015-10-06 11:27:08 | CoderDojo/community-platform | https://api.github.com/repos/CoderDojo/community-platform | closed | Docklands Dojo not searchable on map | bug high priority | I searched for "Docklands" and "CHQ" but I can't find it.

When I zoom in to the location also I cannot find it - I just get a cluster of "2" and when I click on "2" nothing happens:

Dojo listing is here: http://zen.coderdojo.com/dashboard/dojo/ie/the-chq-building-ifsc-dublin-docklands-dublin-1-ireland/dublin-docklands-chq
Are we indexing the "name" field for the map search? If not I think we need to be. | 1.0 | Docklands Dojo not searchable on map - I searched for "Docklands" and "CHQ" but I can't find it.

When I zoom in to the location also I cannot find it - I just get a cluster of "2" and when I click on "2" nothing happens:

Dojo listing is here: http://zen.coderdojo.com/dashboard/dojo/ie/the-chq-building-ifsc-dublin-docklands-dublin-1-ireland/dublin-docklands-chq
Are we indexing the "name" field for the map search? If not I think we need to be. | priority | docklands dojo not searchable on map i searched for docklands and chq but i can t find it when i zoom in to the location also i cannot find it i just get a cluster of and when i click on nothing happens dojo listing is here are we indexing the name field for the map search if not i think we need to be | 1 |
215,850 | 7,298,714,812 | IssuesEvent | 2018-02-26 17:47:37 | zom/Zom-iOS | https://api.github.com/repos/zom/Zom-iOS | closed | Add setup options in 'me' tab if no account is setup | FOR REVIEW high-priority | Add the options to 'Create an account' or 'Sign into an existing account' in the me tab if there is no account. | 1.0 | Add setup options in 'me' tab if no account is setup - Add the options to 'Create an account' or 'Sign into an existing account' in the me tab if there is no account. | priority | add setup options in me tab if no account is setup add the options to create an account or sign into an existing account in the me tab if there is no account | 1 |
246,598 | 7,895,420,599 | IssuesEvent | 2018-06-29 03:10:02 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Average value query doesn't take into account the actual data | Likelihood: 3 - Occasional OS: All Priority: High Severity: 4 - Crash / Wrong Results Support Group: Any bug version: 2.10.0 | Carly Whitmore is working with an SPH dataset, and an average value query she ran doesn't appear to take into account what is actually being displayed. In her case she has a portion of the mesh that isn't moving and a portion of the mesh that is moving at a constant speed. When she selects just the portion that is moving, she expects to get the speed of just that portion; instead she is getting a lower value that is probably an average over all the values.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 09/15/2016 03:03 pm
Original update: 08/02/2017 04:11 pm
Ticket number: 2680 | 1.0 | Average value query doesn't take into account the actual data - Carly Whitmore is working with an SPH dataset, and an average value query she ran doesn't appear to take into account what is actually being displayed. In her case she has a portion of the mesh that isn't moving and a portion of the mesh that is moving at a constant speed. When she selects just the portion that is moving, she expects to get the speed of just that portion; instead she is getting a lower value that is probably an average over all the values.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 09/15/2016 03:03 pm
Original update: 08/02/2017 04:11 pm
Ticket number: 2680 | priority | average value query doesn t take into account the actual data carly whitmore is working with an sph dataset and when she did an average value query that doesn t appear to be taking into account what is actually being displayed in her case she has a portion of the mesh that isn t moving and a portion of the mesh that is moving at a constant speed when she selects just the portion that is moving she expects to get the speed of just the portion that is moving instead she is getting a value that is lower and is probably averaging all the values redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author eric brugger original creation pm original update pm ticket number | 1 |
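The suspected behavior, averaging over the whole dataset instead of only the user's selection, can be illustrated with a small generic sketch in plain NumPy (the numbers and the selection rule are invented, not taken from the project's code):

```python
import numpy as np

# Speeds on a toy "mesh": half the nodes are stationary (0.0), half move at 2.0.
speeds = np.array([0.0, 0.0, 0.0, 2.0, 2.0, 2.0])
selected = speeds > 0.5  # the user's selection: only the moving portion

# Averaging everything: the buggy behavior described in the report.
avg_all = speeds.mean()                 # 1.0, lower than expected

# Averaging only the selected portion: the result the user expects.
avg_selected = speeds[selected].mean()  # 2.0

print(avg_all, avg_selected)
```

The lower value the user saw (an average diluted by the stationary nodes) matches the first computation rather than the second.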
675,228 | 23,085,263,563 | IssuesEvent | 2022-07-26 10:48:18 | fyusuf-a/ft_transcendence | https://api.github.com/repos/fyusuf-a/ft_transcendence | closed | Frontend won't build in development (and in production) | bug frontend ci-cd HIGH PRIORITY | # To reproduce
### Env
```
# Build values
NODE_IMAGE=lts-alpine
NGINX_IMAGE=stable-alpine
BACKEND_DOCKERFILE=Dockerfile
FRONTEND_DOCKERFILE=Dockerfile
```
### Command
`docker-compose --profile debug --profile frontend up`
### Error
```
Build finish
ERROR in src/dtos/matches/match.dto.ts:3:22
ft_transcendence-frontend-1 | TS7016: Could not find a declaration file for module 'class-transformer'. '/app/node_modules/class-transformer/cjs/index.js' implicitly has an 'any' type.
ft_transcendence-frontend-1 | Try `npm i --save-dev @types/class-transformer` if it exists or add a new declaration (.d.ts) file containing `declare module 'class-transformer';`
ft_transcendence-frontend-1 | 1 | import { ApiProperty } from '@nestjs/swagger';
ft_transcendence-frontend-1 | 2 | import { IsDate, IsInt, IsPositive, IsEnum } from 'class-validator';
ft_transcendence-frontend-1 | > 3 | import { Type } from 'class-transformer';
ft_transcendence-frontend-1 | | ^^^^^^^^^^^^^^^^^^^
ft_transcendence-frontend-1 | 4 |
ft_transcendence-frontend-1 | 5 | export enum MatchStatusType {
ft_transcendence-frontend-1 | 6 | HOME = 'HOME',
ft_transcendence-frontend-1 |
ft_transcendence-frontend-1 | ERROR in src/dtos/pages/page-options.dto.ts:4:22
ft_transcendence-frontend-1 | TS7016: Could not find a declaration file for module 'class-transformer'. '/app/node_modules/class-transformer/cjs/index.js' implicitly has an 'any' type.
ft_transcendence-frontend-1 | Try `npm i --save-dev @types/class-transformer` if it exists or add a new declaration (.d.ts) file containing `declare module 'class-transformer';`
ft_transcendence-frontend-1 | 2 |
ft_transcendence-frontend-1 | 3 | import { ApiPropertyOptional } from '@nestjs/swagger';
ft_transcendence-frontend-1 | > 4 | import { Type } from 'class-transformer';
ft_transcendence-frontend-1 | | ^^^^^^^^^^^^^^^^^^^
ft_transcendence-frontend-1 | 5 | import { Min, Max, IsEnum, IsInt, IsOptional } from 'class-validator';
ft_transcendence-frontend-1 | 6 |
ft_transcendence-frontend-1 | 7 | export enum Order {
ft_transcendence-frontend-1 |
```
### Notes
- running `npm i --save-dev @types/class-transformer` as suggested gives the error that `'@types/class-transformer@*' is not in this registry.`
- #166 does not seem to affect this error | 1.0 | Frontend won't build in development (and in production) - # To reproduce
### Env
```
# Build values
NODE_IMAGE=lts-alpine
NGINX_IMAGE=stable-alpine
BACKEND_DOCKERFILE=Dockerfile
FRONTEND_DOCKERFILE=Dockerfile
```
### Command
`docker-compose --profile debug --profile frontend up`
### Error
```
Build finish
ERROR in src/dtos/matches/match.dto.ts:3:22
ft_transcendence-frontend-1 | TS7016: Could not find a declaration file for module 'class-transformer'. '/app/node_modules/class-transformer/cjs/index.js' implicitly has an 'any' type.
ft_transcendence-frontend-1 | Try `npm i --save-dev @types/class-transformer` if it exists or add a new declaration (.d.ts) file containing `declare module 'class-transformer';`
ft_transcendence-frontend-1 | 1 | import { ApiProperty } from '@nestjs/swagger';
ft_transcendence-frontend-1 | 2 | import { IsDate, IsInt, IsPositive, IsEnum } from 'class-validator';
ft_transcendence-frontend-1 | > 3 | import { Type } from 'class-transformer';
ft_transcendence-frontend-1 | | ^^^^^^^^^^^^^^^^^^^
ft_transcendence-frontend-1 | 4 |
ft_transcendence-frontend-1 | 5 | export enum MatchStatusType {
ft_transcendence-frontend-1 | 6 | HOME = 'HOME',
ft_transcendence-frontend-1 |
ft_transcendence-frontend-1 | ERROR in src/dtos/pages/page-options.dto.ts:4:22
ft_transcendence-frontend-1 | TS7016: Could not find a declaration file for module 'class-transformer'. '/app/node_modules/class-transformer/cjs/index.js' implicitly has an 'any' type.
ft_transcendence-frontend-1 | Try `npm i --save-dev @types/class-transformer` if it exists or add a new declaration (.d.ts) file containing `declare module 'class-transformer';`
ft_transcendence-frontend-1 | 2 |
ft_transcendence-frontend-1 | 3 | import { ApiPropertyOptional } from '@nestjs/swagger';
ft_transcendence-frontend-1 | > 4 | import { Type } from 'class-transformer';
ft_transcendence-frontend-1 | | ^^^^^^^^^^^^^^^^^^^
ft_transcendence-frontend-1 | 5 | import { Min, Max, IsEnum, IsInt, IsOptional } from 'class-validator';
ft_transcendence-frontend-1 | 6 |
ft_transcendence-frontend-1 | 7 | export enum Order {
ft_transcendence-frontend-1 |
```
### Notes
- running `npm i --save-dev @types/class-transformer` as suggested gives the error that `'@types/class-transformer@*' is not in this registry.`
- #166 does not seem to affect this error | priority | frontend won t build in development and in production to reproduce env build values node image lts alpine nginx image stable alpine backend dockerfile dockerfile frontend dockerfile dockerfile command docker compose profile debug profile frontend up error build finish error in src dtos matches match dto ts ft transcendence frontend could not find a declaration file for module class transformer app node modules class transformer cjs index js implicitly has an any type ft transcendence frontend try npm i save dev types class transformer if it exists or add a new declaration d ts file containing declare module class transformer ft transcendence frontend import apiproperty from nestjs swagger ft transcendence frontend import isdate isint ispositive isenum from class validator ft transcendence frontend import type from class transformer ft transcendence frontend ft transcendence frontend ft transcendence frontend export enum matchstatustype ft transcendence frontend home home ft transcendence frontend ft transcendence frontend error in src dtos pages page options dto ts ft transcendence frontend could not find a declaration file for module class transformer app node modules class transformer cjs index js implicitly has an any type ft transcendence frontend try npm i save dev types class transformer if it exists or add a new declaration d ts file containing declare module class transformer ft transcendence frontend ft transcendence frontend import apipropertyoptional from nestjs swagger ft transcendence frontend import type from class transformer ft transcendence frontend ft transcendence frontend import min max isenum isint isoptional from class validator ft transcendence frontend ft transcendence frontend export enum order ft transcendence frontend notes running npm i save dev types class transformer as suggested gives the error that types class transformer is not in this registry does not seem to affect this error 
| 1 |
438,540 | 12,640,891,421 | IssuesEvent | 2020-06-16 04:31:32 | CHOMPStation2/CHOMPStation2 | https://api.github.com/repos/CHOMPStation2/CHOMPStation2 | closed | Belt Miner Bugs/Issues | Bug High Priority Map Edit | Mostly a reminder for @Rykka-Stormheart to deal with.
Smol quips:
- Belter Outpost
- - Outpost power seems to bleed out a fair bit quickly.
- - Wiring from solars to the airlock to have space plating instead
- - Runtimes when shuttle leaves the transition area as shown in https://discordapp.com/channels/180538463655821312/405257702663782400/669761697845608450
- - Maybe a suit cycler room as well as two extra voidsuits as reserve.
- - _Lessen the power draw from atmospherics in the main room_
- - 1 light switch per area is _fine_
- - Not enough solar panels to give power to the whole outpost.
- POI issues.
- - Crates POI are empty crates, treasure/reward thingy not set, as said in deadchat.
- - Frost spider POI has regular frost spooders and space variations spawning at once.
- - Another spooder POI https://discordapp.com/channels/180538463655821312/405257702663782400/669753224760131590, intended per discord chatter to change into carp spawns, I believe.
- - Mobs in other areas seem to be dying: bats, hounds, bears, space variation of frost spooders.
- - Faction issues with mobs spawned inside a POI being at odds with each other. https://cdn.discordapp.com/attachments/405257702663782400/670325354287595530/unknown.png for reference.
#### Code Revision
- Server revision: master - 2020-01-23
a0b222307faf01b029bf5567cb33b061ce8607ec
#### Anything else you may wish to add:
- (Location if it's a mapping issue, screenshots, sprites, etc.)
| 1.0 | Belt Miner Bugs/Issues - Mostly a reminder for @Rykka-Stormheart to deal with.
Smol quips:
- Belter Outpost
- - Outpost power seems to bleed out a fair bit quickly.
- - Wiring from solars to the airlock to have space plating instead
- - Runtimes when shuttle leaves the transition area as shown in https://discordapp.com/channels/180538463655821312/405257702663782400/669761697845608450
- - Maybe a suit cycler room as well as two extra voidsuits as reserve.
- - _Lessen the power draw from atmospherics in the main room_
- - 1 light switch per area is _fine_
- - Not enough solar panels to give power to the whole outpost.
- POI issues.
- - Crates POI are empty crates, treasure/reward thingy not set, as said in deadchat.
- - Frost spider POI has regular frost spooders and space variations spawning at once.
- - Another spooder POI https://discordapp.com/channels/180538463655821312/405257702663782400/669753224760131590, intended per discord chatter to change into carp spawns, I believe.
- - Mobs in other areas seem to be dying: bats, hounds, bears, space variation of frost spooders.
- - Faction issues with mobs spawned inside a POI being at odds with each other. https://cdn.discordapp.com/attachments/405257702663782400/670325354287595530/unknown.png for reference.
#### Code Revision
- Server revision: master - 2020-01-23
a0b222307faf01b029bf5567cb33b061ce8607ec
#### Anything else you may wish to add:
- (Location if it's a mapping issue, screenshots, sprites, etc.)
| priority | belt miner bugs issues mostly a reminder for rykka stormheart to deal with smol quips belter outpost outpost power seems to bleed out a fair bit quickly wiring from solars to the airlock to have space plating instead runtimes when shuttle leaves the transition area as shown in maybe a suit cycler room as well as two extra voidsuits as reserve lessen the power draw from atmospherics in the main room light switch per area is fine not enough solar panels to give power to the whole outpost poi issues crates poi are empty crates treasure reward thingy not set as said in deadchat frost spider poi has regular frost spooders and space variations spawning at once another spooder poi intended per discord chatter to change into carp spawns i believe mobs in other areas seem to be dying bats hounds bears space variation of frost spooders faction issues with mobs spawned inside a poi being at odds with each other for refference code revision server revision master anything else you may wish to add location if it s a mapping issue screenshots sprites etc | 1 |
316,163 | 9,637,624,060 | IssuesEvent | 2019-05-16 09:13:43 | HGustavs/LenaSYS | https://api.github.com/repos/HGustavs/LenaSYS | closed | Diagram: Not be able to enable menu-items that are disabled | Diagram gruppA2019 highPriority | Right now you can enable menu-items that are disabled. This should not work.
<img width="288" alt="Skärmavbild 2019-05-08 kl 09 55 52" src="https://user-images.githubusercontent.com/37792795/57359235-7cc76500-7177-11e9-8ba2-9c928fac35ba.png">
| 1.0 | Diagram: Not be able to enable menu-items that are disabled - Right now you can enable menu-items that are disabled. This should not work.
<img width="288" alt="Skärmavbild 2019-05-08 kl 09 55 52" src="https://user-images.githubusercontent.com/37792795/57359235-7cc76500-7177-11e9-8ba2-9c928fac35ba.png">
| priority | diagram not be able to enable menu items that are disabled right now you can enable menu items that are disabled this should not work img width alt skärmavbild kl src | 1 |
45,107 | 2,920,424,015 | IssuesEvent | 2015-06-24 18:51:58 | rh-lab-q/rpg | https://api.github.com/repos/rh-lab-q/rpg | opened | spec: add `buildrequires_files` and `requires` files attr | high_priority | all plugins should append the required files to these attrs instead of spec.(Build)Requires. Change it in `files_to_pkgs` plugin too | 1.0 | spec: add `buildrequires_files` and `requires` files attr - all plugins should append the required files to these attrs instead of spec.(Build)Requires. Change it in `files_to_pkgs` plugin too | priority | spec add buildrequires files and requires files attr all plugins should append the required files to these attrs instead of spec build requires change it in files to pkgs plugin too | 1 |
140,361 | 5,400,680,839 | IssuesEvent | 2017-02-27 22:40:30 | vmware/vic | https://api.github.com/repos/vmware/vic | closed | Delete vsan dom objects in vic-machine delete and image/volume delete | component/vic-machine kind/bug priority/high | As a customer, I don't want to leak storage resources after using a VCH, no matter whether it's docker rmi, docker volume rm or vic-machine delete. I need the storage used by the VCH to be deleted cleanly.
Related to this story, we need to add the following code:
- [ ] in vic-machine delete, call vsan dom cache to clean all orphan dom objects, which means the vmdk is deleted but the dom object still exists.
- [ ] in storage portlayer, call vsan dom cache to delete dom object related to specific vmdk file
- [ ] move image storage parent namespace creation from portlayer to vic-machine, like volume did, and record the namespace path, instead of the dir alias, which is not retrievable in vsan dom cache.
depends on PR #3929, additional work for #3787 | 1.0 | Delete vsan dom objects in vic-machine delete and image/volume delete - As a customer, I don't want to leak storage resources after using a VCH, no matter whether it's docker rmi, docker volume rm or vic-machine delete. I need the storage used by the VCH to be deleted cleanly.
Related to this story, we need to add the following code:
- [ ] in vic-machine delete, call vsan dom cache to clean all orphan dom objects, which means the vmdk is deleted but the dom object still exists.
- [ ] in storage portlayer, call vsan dom cache to delete dom object related to specific vmdk file
- [ ] move image storage parent namespace creation from portlayer to vic-machine, like volume did, and record the namespace path, instead of the dir alias, which is not retrievable in vsan dom cache.
depends on PR #3929, additional work for #3787 | priority | delete vsan dom objects in vic machine delete and image volume delete as a customer i don t want to leak storage resources after use vch no matter it s docker rmi docker volume rm or vic machine delete i need the storage used by vch is deleted cleanly related to this story we need to add following code in vic machine delete call vsan dom cache to clean all orphan dom objects which mean vmdk is deleted but dom still exists in storage portlayer call vsan dom cache to delete dom object related to specific vmdk file move image storage parent namespace creation from portlayer to vic machine like volume did and record the namespace path instead of the dir alias which is not retrievable in vsan dom cache depends on pr additional work for | 1 |
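The first checklist item, cleaning orphan dom objects (dom objects whose backing vmdk has been deleted), is at its core a set difference. A minimal sketch under that assumption, in Python with hypothetical names (the real implementation would go through the vSAN API):

```python
def find_orphan_doms(dom_objects, vmdk_paths):
    """Return dom object IDs whose associated vmdk no longer exists.

    dom_objects: mapping of dom object ID -> backing vmdk path
    vmdk_paths:  set of vmdk paths that still exist on the datastore
    """
    return sorted(oid for oid, vmdk in dom_objects.items()
                  if vmdk not in vmdk_paths)

doms = {
    "dom-1": "[ds] vch/images/a.vmdk",
    "dom-2": "[ds] vch/images/b.vmdk",  # b.vmdk was deleted -> orphan
}
existing = {"[ds] vch/images/a.vmdk"}
print(find_orphan_doms(doms, existing))  # ['dom-2']
```

Everything returned by such a scan would then be deleted through the dom cache, which is the cleanup step the checklist asks vic-machine delete to perform.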
270,675 | 8,468,385,677 | IssuesEvent | 2018-10-23 19:35:32 | spacetelescope/specviz | https://api.github.com/repos/spacetelescope/specviz | closed | Specviz Crashes when trying to set a ROI | bug high-priority scope-5 | ROI in general seems to be buggy. All it does when clicking on it is create one big ROI for the entire spectrum. And sometimes when I click on it, it crashes the entire program with the error message:
(specviz) Larry:~ nluetzge$ specviz
Traceback (most recent call last):
File "/Users/nluetzge/anaconda2/envs/specviz/lib/python3.6/site-packages/specviz-0.6.dev1513-py3.6.egg/specviz/plugins/statistics/statistics_widget.py", line 235, in update_statistics
spectral_region = clip_region(spec, spectral_region)
File "/Users/nluetzge/anaconda2/envs/specviz/lib/python3.6/site-packages/specviz-0.6.dev1513-py3.6.egg/specviz/plugins/statistics/statistics_widget.py", line 45, in clip_region
region.lower = spectrum.spectral_axis.min()
AttributeError: can't set attribute
Abort trap: 6
| 1.0 | Specviz Crashes when trying to set a ROI - ROI in general seems to be buggy. All it does when clicking on it is create one big ROI for the entire spectrum. And sometimes when I click on it, it crashes the entire program with the error message:
(specviz) Larry:~ nluetzge$ specviz
Traceback (most recent call last):
File "/Users/nluetzge/anaconda2/envs/specviz/lib/python3.6/site-packages/specviz-0.6.dev1513-py3.6.egg/specviz/plugins/statistics/statistics_widget.py", line 235, in update_statistics
spectral_region = clip_region(spec, spectral_region)
File "/Users/nluetzge/anaconda2/envs/specviz/lib/python3.6/site-packages/specviz-0.6.dev1513-py3.6.egg/specviz/plugins/statistics/statistics_widget.py", line 45, in clip_region
region.lower = spectrum.spectral_axis.min()
AttributeError: can't set attribute
Abort trap: 6
| priority | specviz crashes when trying to set a roi roi in generel seems to be buggy all it does when clicking on it is creating one big roi for the entire spectrum and sometimes when i click on it it crashes the entire program with the error message specviz larry nluetzge specviz traceback most recent call last file users nluetzge envs specviz lib site packages specviz egg specviz plugins statistics statistics widget py line in update statistics spectral region clip region spec spectral region file users nluetzge envs specviz lib site packages specviz egg specviz plugins statistics statistics widget py line in clip region region lower spectrum spectral axis min attributeerror can t set attribute abort trap | 1 |
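The `AttributeError: can't set attribute` in the traceback is what Python raises when code assigns to a property that defines no setter, as `clip_region` does with `region.lower`. A generic illustration with toy classes (not specviz's actual ones), plus the usual workaround of constructing a new object:

```python
class Region:
    """Toy stand-in for a spectral region whose bounds are read-only."""
    def __init__(self, lower, upper):
        self._lower = lower
        self._upper = upper

    @property
    def lower(self):   # getter only: assigning to .lower raises AttributeError
        return self._lower

    @property
    def upper(self):
        return self._upper

region = Region(5.0, 10.0)
try:
    region.lower = 6.0   # same failure mode as clip_region's assignment
except AttributeError:
    print("assignment to a setter-less property raises AttributeError")

# Workaround: build a new, clipped region instead of mutating in place.
clipped = Region(max(region.lower, 6.0), region.upper)
print(clipped.lower, clipped.upper)  # 6.0 10.0
```

So a fix along these lines would have `clip_region` return a freshly constructed region rather than assigning to the existing one's bounds.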
187,966 | 6,763,076,534 | IssuesEvent | 2017-10-25 10:11:20 | metasfresh/metasfresh | https://api.github.com/repos/metasfresh/metasfresh | closed | Source HU Automatism for Manufacturing Issue | branch:master branch:release priority:high | ### Is this a bug or feature request?
Feature Request
### What is the current behavior?
Currently, it is not possible to select a Source HU as Default Action Issue Source when fitting.
#### Which are the steps to reproduce?
Open, check and see.
### What is the expected or desired behavior?
Allow defining Source HUs for Action Issue and a given Manufacturing Resource/Plant. Something similar was already developed in Picking. | 1.0 | Source HU Automatism for Manufacturing Issue - ### Is this a bug or feature request?
Feature Request
### What is the current behavior?
Currently, it is not possible to select a Source HU as Default Action Issue Source when fitting.
#### Which are the steps to reproduce?
Open, check and see.
### What is the expected or desired behavior?
Allow defining Source HUs for Action Issue and a given Manufacturing Resource/Plant. Something similar was already developed in Picking. | priority | source hu automatism for manufacturing issue is this a bug or feature request feature request what is the current behavior currently it is not possible to select a source hu as default action issue source when fitting which are the steps to reproduce open check and see what is the expected or desired behavior allow defining source hu s for action issue and a given manufacturing ressource plant something similar was already developed in picking | 1
294,890 | 9,050,027,115 | IssuesEvent | 2019-02-12 07:19:04 | richelbilderbeek/pirouette | https://api.github.com/repos/richelbilderbeek/pirouette | closed | Add "twinning_params" in "create_twin_tree" call in "pir_run" | high priority | When creating the twin tree in pir_run (currently R.43) the argument "twinning_params" of the function "create_twin_tree" calls the default option.
I believe it is more correct to call the actual "twinning_params" provided by "pir_params". If so we can add an option to recall the "pir_params$alignment_params$rng_seed" if "pir_params$twinning_params$rng_seed" is set to something like "same_seed". This could provide the solution to issue #84 | 1.0 | Add "twinning_params" in "create_twin_tree" call in "pir_run" - When creating the twin tree in pir_run (currently R.43) the argument "twinning_params" of the function "create_twin_tree" calls the default option.
I believe it is more correct to call the actual "twinning_params" provided by "pir_params". If so we can add an option to recall the "pir_params$alignment_params$rng_seed" if "pir_params$twinning_params$rng_seed" is set to something like "same_seed". This could provide the solution to issue #84 | priority | add twinning params in create twin tree call in pir run when creating the twin tree in pir run currently r the argument twinning params of the function create twin tree calls the default option i believe it is more correct to call the actual twinning params provided by pir params if so we can add an option to recall the pir params alignment params rng seed if pir params twinning params rng seed is set to something like same seed this could provide the solution to issue | 1 |
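The proposed `"same_seed"` behavior, reusing `pir_params$alignment_params$rng_seed` whenever `pir_params$twinning_params$rng_seed` is set to the sentinel, amounts to a tiny resolver. A sketch in Python (the package itself is R; all names here are illustrative):

```python
def resolve_twinning_seed(twinning_seed, alignment_seed):
    """Return the RNG seed the twinning step should use.

    The sentinel "same_seed" means: reuse the seed that was used for the
    alignment, keeping the twin pipeline comparable to the true one.
    """
    if twinning_seed == "same_seed":
        return alignment_seed
    return twinning_seed

print(resolve_twinning_seed("same_seed", 314))  # 314
print(resolve_twinning_seed(42, 314))           # 42
```

In the R package this check would run inside pir_run before create_twin_tree is called, so the twinning_params actually provided by pir_params are honored.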
428,206 | 12,404,720,898 | IssuesEvent | 2020-05-21 16:00:39 | UC-Davis-molecular-computing/scadnano | https://api.github.com/repos/UC-Davis-molecular-computing/scadnano | opened | Modification font_size and display_connector should be in AppUIState | high priority invalid | Currently, font_size and display_connector (whether to visually offset the modification from the base) are specified in the Modification JSON itself. This is ugly and should be removed since these do not describe a DNA design itself.
Make these UI options in the web interface instead. | 1.0 | Modification font_size and display_connector should be in AppUIState - Currently, font_size and display_connector (whether to visually offset the modification from the base) are specified in the Modification JSON itself. This is ugly and should be removed since these do not describe a DNA design itself.
Make these UI options in the web interface instead. | priority | modification font size and display connector should be in appuistate currently font size and display connector whether to visually offset the modification from the base are specified in the modification json itself this is ugly and should be removed since these do not describe a dna design itself make these ui options in the web interface instead | 1 |
152,089 | 5,832,704,097 | IssuesEvent | 2017-05-08 22:36:37 | igvteam/juicebox.js | https://api.github.com/repos/igvteam/juicebox.js | closed | Tracks Should Be Customizable | high priority | The tracks cannot be customized, for instance changing color. (For signal tracks, the customizations in the jbox application, such as mean v max, min, and esp. max value, are critical).
| 1.0 | Tracks Should Be Customizable - The tracks cannot be customized, for instance changing color. (For signal tracks, the customizations in the jbox application, such as mean v max, min, and esp. max value, are critical).
| priority | tracks should be customizable the tracks cannot be customized for instance changing color for signal tracks the customizations in the jbox application such as mean v max min and esp max value are critical | 1 |
499,645 | 14,475,344,721 | IssuesEvent | 2020-12-10 01:23:30 | operate-first/apps | https://api.github.com/repos/operate-first/apps | closed | One component per namespace, or one namespace for all components? | high-priority | Continuing discussions from here #3
There is a bit of disagreement on which method to follow, please air your concerns here once more so we can continue the discussion. | 1.0 | One component per namespace, or one namespace for all components? - Continuing discussions from here #3
There is a bit of disagreement on which method to follow, please air your concerns here once more so we can continue the discussion. | priority | one component per namespace or one namespace for all components continuing discussions from here there is a bit of disagreement on which method to follow please air your concerns here once more so we can continue the discussion | 1 |
488,959 | 14,099,944,748 | IssuesEvent | 2020-11-06 02:46:29 | CMPUT301F20T41/boromi | https://api.github.com/repos/CMPUT301F20T41/boromi | opened | Make It Pretty: The sequel | Length: Less than 5 Hours Priority: High Status: In Progress Type: Enhancement | ## Description
I'll probably make some aesthetic changes to the following
* The search bar in the search tab
* The cards (there's a little too much elevation)
* The request button in the search results
* Make all the pictures circular | 1.0 | Make It Pretty: The sequel - ## Description
I'll probably make some aesthetic changes to the following
* The search bar in the search tab
* The cards (there's a little too much elevation)
* The request button in the search results
* Make all the pictures circular | priority | make it pretty the sequel description i ll probably make some aesthetic changes to the following the search bar in the search tab the cards there s a little too much elevation the request button in the search results make all the pictures circular | 1 |
789,938 | 27,810,348,665 | IssuesEvent | 2023-03-18 03:17:26 | ImranR98/Obtainium | https://api.github.com/repos/ImranR98/Obtainium | closed | Duplicate Downloads, Name Errors | bug high priority | Sometimes, clicking the update button doesn't immediately start a download. That seems to be the case for me on [this repo](https://github.com/NoName-exe/revanced-extended/).
The download button doesn't really go away, so I just think the click didn't work. I tap it a few times for good measure, and now I have Obtainium downloading the same file multiple times, all hogging my bandwidth, slowing down the downloads.
At the end of it all, if I wait patiently, I'll get a bunch of errors telling me something about names. I suppose that's because the first download to finish already put the apk there, and downloading the same thing again can neither replace that, nor have a numbered name for duplication.
But at least one download works, which lets me update the app.
I figure that this could be easily solved if the download/update button/icon disappeared after the first click and was replaced with a "trying" or "buffering" or "loading" icon. Like those dots that go round and round.
This should tell the user that their click has been registered and that Obtainium is trying to start the download. | 1.0 | Duplicate Downloads, Name Errors - Sometimes, clicking the update button doesn't immediately start a download. That seems to be the case for me on [this repo](https://github.com/NoName-exe/revanced-extended/).
The download button doesn't really go away, so I just think the click didn't work. I tap it a few times for good measure, and now I have Obtainium downloading the same file multiple times, all hogging my bandwidth, slowing down the downloads.
At the end of it all, if I wait patiently, I'll get a bunch of errors telling me something about names. I suppose that's because the first download to finish already put the apk there, and downloading the same thing again can neither replace that, nor have a numbered name for duplication.
But at least one download works, which lets me update the app.
I figure that this could be easily solved if the download/update button/icon disappeared after the first click and was replaced with a "trying" or "buffering" or "loading" icon. Like those dots that go round and round.
This should tell the user that their click has been registered and that Obtainium is trying to start the download. | priority | duplicate downloads name errors sometimes clicking the update button doesn t immediately start a download that seems to be the case for me on the download button doesn t really go away so i just think the click didn t work i tap it a few times for good measure and now i have obtainium downloading the same file multiple times all hogging my bandwidth slowing down the downloads at the end of it all if i wait patiently i ll get a bunch of errors telling me something about names i suppose that s because the first download to finish already put the apk there and downloading the same thing again can neither replace that nor have a numbered name for duplication but at least one download works which lets me update the app i figure that this could be easily solved if the download update button icon disappeared after the first click and was replaced with a trying or buffering or loading icon like those dots that go round and round this should tell the user that their click has been registered and that obtainium is trying to start the download | 1 |
547,380 | 16,042,058,486 | IssuesEvent | 2021-04-22 09:06:43 | sopra-fs21-group-05/group-05-server | https://api.github.com/repos/sopra-fs21-group-05/group-05-server | closed | Create game room endpoint & respective method in controller class(es) | high priority task | - [x] Create mappings to the requests from the client
- [x] Write a createGameroom() method that creates the game room in the database
Time estimate: 4h
This task is part of user story #1
| 1.0 | Create game room endpoint & respective method in controller class(es) - - [x] Create mappings to the requests from the client
- [x] Write a createGameroom() method that creates the game room in the database
Time estimate: 4h
This task is part of user story #1
| priority | create game room endpoint respective method in controller class es create mappings to the requests from the client write a creategameroom method that creates the game room in the database time estimate this task is part of user story | 1 |
828,803 | 31,842,672,092 | IssuesEvent | 2023-09-14 17:27:43 | eclipse/jetty.project | https://api.github.com/repos/eclipse/jetty.project | closed | `HttpCookieStore` incorrectly rejects cookies for domains that are an IPv6 address | High Priority Bug | **Jetty version(s)**
12
**Description**
```java
HttpCookieStore.add(
URI.create("http://[::1]"),
HttpCookie.build("n", "v").domain("[::1]").build()
);
```
returns `false` while it does return `true` if an IPv4 is used:
```java
HttpCookieStore.add(
URI.create("http://127.0.0.1"),
HttpCookie.build("n", "v").domain("127.0.0.1").build()
);
```
This breaks cookies handling when Jetty is accessed via an URL using an IPv6 as the hostname. | 1.0 | `HttpCookieStore` incorrectly rejects cookies for domains that are an IPv6 address - **Jetty version(s)**
12
**Description**
```java
HttpCookieStore.add(
URI.create("http://[::1]"),
HttpCookie.build("n", "v").domain("[::1]").build()
);
```
returns `false` while it does return `true` if an IPv4 is used:
```java
HttpCookieStore.add(
URI.create("http://127.0.0.1"),
HttpCookie.build("n", "v").domain("127.0.0.1").build()
);
```
This breaks cookies handling when Jetty is accessed via an URL using an IPv6 as the hostname. | priority | httpcookiestore incorrectly rejects cookies for domains that are an address jetty version s description java httpcookiestore add uri create http httpcookie build n v domain build returns false while it does return true if an is used java httpcookiestore add uri create httpcookie build n v domain build this breaks cookies handling when jetty is accessed via an url using an as the hostname | 1 |
101,435 | 4,117,585,749 | IssuesEvent | 2016-06-08 08:08:20 | OpenSRP/opensrp-server | https://api.github.com/repos/OpenSRP/opensrp-server | opened | Two way sync between OpenSRP and OpenMRS. | enhancement High Priority | Handle two sync between OpenSRP and OpenMRS.
The decision was to use bahmni-atomfeed module for the purpose. The module was worked on and tested as a proof of concept.
Further changes should be made to handle Pakistan`s EndTB requirements. | 1.0 | Two way sync between OpenSRP and OpenMRS. - Handle two sync between OpenSRP and OpenMRS.
The decision was to use bahmni-atomfeed module for the purpose. The module was worked on and tested as a proof of concept.
Further changes should be made to handle Pakistan`s EndTB requirements. | priority | two way sync between opensrp and openmrs handle two sync between opensrp and openmrs the decision was to use bahmni atomfeed module for the purpose the module was worked on and tested as a proof of concept further changes should be made to handle pakistan s endtb requirements | 1 |
395,306 | 11,683,559,312 | IssuesEvent | 2020-03-05 03:48:48 | StudioTBA/CoronaIO | https://api.github.com/repos/StudioTBA/CoronaIO | opened | Creation of human agent | Character development Priority: High | **Is your feature request related to a problem? Please describe.**
There is no human class/prefab for everyone to work on it separately.
**Describe the solution you would like**
There is no human class/prefab for everyone to work on it separately. | 1.0 | Creation of human agent - **Is your feature request related to a problem? Please describe.**
There is no human class/prefab for everyone to work on it separately.
**Describe the solution you would like**
There is no human class/prefab for everyone to work on it separately. | priority | creation of human agent is your feature request related to a problem please describe there is no human class prefab for everyone to work on it separately describe the solution you would like there is no human class prefab for everyone to work on it separately | 1 |
637,505 | 20,670,300,463 | IssuesEvent | 2022-03-10 00:55:20 | 501stLegionA3/FiveOhFirstDataCore | https://api.github.com/repos/501stLegionA3/FiveOhFirstDataCore | closed | Admin Access keeps being lost | bug data priority-high | # Describe the bug
I can no longer access any of the admin pages, despite logging out and logging in, closing the tab and reopening it, etc.
This has happened multiple times, reliability issue?
# To Reproduce
1. Log in
2. Try to access any admin page i.e /admin
3. "You do not have permission to access this page"
# Expected behavior
Access to da page
# Screenshots

| 1.0 | Admin Access keeps being lost - # Describe the bug
I can no longer access any of the admin pages, despite logging out and logging in, closing the tab and reopening it, etc.
This has happened multiple times, reliability issue?
# To Reproduce
1. Log in
2. Try to access any admin page i.e /admin
3. "You do not have permission to access this page"
# Expected behavior
Access to da page
# Screenshots

| priority | admin access keeps being lost describe the bug i can no longer access any of the admin pages despite logging out and logging in closing the tab and reopening it etc this has happened multiple times reliability issue to reproduce log in try to access any admin page i e admin you do not have permission to access this page expected behavior access to da page screenshots | 1 |
774,811 | 27,212,194,863 | IssuesEvent | 2023-02-20 17:30:01 | medic/cht-core | https://api.github.com/repos/medic/cht-core | closed | API Crashes on malformed translation docs | Type: Bug Priority: 1 - High | **Describe the bug**
API fails to start when any of the translation docs `messages-xx` is missing some properties that are expected.
This can happen for a number of reasons, and has been the case for many releases.
Since 4.0, this can be triggered very easily when deleting languages.
**To Reproduce**
Steps to reproduce the behavior:
1. Make sure Sentinel is up and up to date with the queue.
2. Create a new language (can do from admin app)
3. Delete this new language. (can do from admin app)
4. Wait for Sentinel to process this deletion.
5. Restart API.
6. See error.
**Expected behavior**
API should not crash when languages are deleted.
**Logs**
```
2023-01-23 15:31:19 ERROR: Fatal error initialising medic-api
2023-01-23 15:31:19 ERROR: TypeError: Cannot convert undefined or null to object
at Function.keys (<anonymous>)
at Object.loadTranslations (/home/diana/projects/medic/shared-libs/translation-utils/src/translation-utils.js:5:12)
at /home/diana/projects/medic/api/src/services/config-watcher.js:32:55
at Array.forEach (<anonymous>)
at /home/diana/projects/medic/api/src/services/config-watcher.js:27:23
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async /home/diana/projects/medic/api/server.js:38:5 {
[stack]: 'TypeError: Cannot convert undefined or null to object\n' +
' at Function.keys (<anonymous>)\n' +
' at Object.loadTranslations (/home/diana/projects/medic/shared-libs/translation-utils/src/translation-utils.js:5:12)\n' +
' at /home/diana/projects/medic/api/src/services/config-watcher.js:32:55\n' +
' at Array.forEach (<anonymous>)\n' +
' at /home/diana/projects/medic/api/src/services/config-watcher.js:27:23\n' +
' at processTicksAndRejections (node:internal/process/task_queues:96:5)\n' +
' at async /home/diana/projects/medic/api/server.js:38:5',
[message]: 'Cannot convert undefined or null to object'
}
```
**Environment**
- Instance: local
- App: api
- Version: 3.4.0+ would crash on malformed translation docs, but after 4.0 api will crash when languages are deleted because that will produce the "malformed" docs (tombstones of deleted translation docs) and serve them to translations utils.
The workaround for 4.x projects encountering this is to delete all `messages-xx` tombstones and restart API. | 1.0 | API Crashes on malformed translation docs - **Describe the bug**
API fails to start when any of the translation docs `messages-xx` is missing some properties that are expected.
This can happen for a number of reasons, and has been the case for many releases.
Since 4.0, this can be triggered very easily when deleting languages.
**To Reproduce**
Steps to reproduce the behavior:
1. Make sure Sentinel is up and up to date with the queue.
2. Create a new language (can do from admin app)
3. Delete this new language. (can do from admin app)
4. Wait for Sentinel to process this deletion.
5. Restart API.
6. See error.
**Expected behavior**
API should not crash when languages are deleted.
**Logs**
```
2023-01-23 15:31:19 ERROR: Fatal error initialising medic-api
2023-01-23 15:31:19 ERROR: TypeError: Cannot convert undefined or null to object
at Function.keys (<anonymous>)
at Object.loadTranslations (/home/diana/projects/medic/shared-libs/translation-utils/src/translation-utils.js:5:12)
at /home/diana/projects/medic/api/src/services/config-watcher.js:32:55
at Array.forEach (<anonymous>)
at /home/diana/projects/medic/api/src/services/config-watcher.js:27:23
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async /home/diana/projects/medic/api/server.js:38:5 {
[stack]: 'TypeError: Cannot convert undefined or null to object\n' +
' at Function.keys (<anonymous>)\n' +
' at Object.loadTranslations (/home/diana/projects/medic/shared-libs/translation-utils/src/translation-utils.js:5:12)\n' +
' at /home/diana/projects/medic/api/src/services/config-watcher.js:32:55\n' +
' at Array.forEach (<anonymous>)\n' +
' at /home/diana/projects/medic/api/src/services/config-watcher.js:27:23\n' +
' at processTicksAndRejections (node:internal/process/task_queues:96:5)\n' +
' at async /home/diana/projects/medic/api/server.js:38:5',
[message]: 'Cannot convert undefined or null to object'
}
```
**Environment**
- Instance: local
- App: api
- Version: 3.4.0+ would crash on malformed translation docs, but after 4.0 api will crash when languages are deleted because that will produce the "malformed" docs (tombstones of deleted translation docs) and serve them to translations utils.
The workaround for 4.x projects encountering this is to delete all `messages-xx` tombstones and restart API. | priority | api crashes on malformed translation docs describe the bug api fails to start when any of the translation docs messages xx is missing some properties that are expected this can happen for a number of reasons and has been the case for many releases since this can be triggered very easily when deleting languages to reproduce steps to reproduce the behavior make sure sentinel is up and up to date with the queue create a new language can do from admin app delete this new language can do from admin app wait for sentinel to process this deletion restart api see error expected behavior api should not crash when languages are deleted logs error fatal error initialising medic api error typeerror cannot convert undefined or null to object at function keys at object loadtranslations home diana projects medic shared libs translation utils src translation utils js at home diana projects medic api src services config watcher js at array foreach at home diana projects medic api src services config watcher js at processticksandrejections node internal process task queues at async home diana projects medic api server js typeerror cannot convert undefined or null to object n at function keys n at object loadtranslations home diana projects medic shared libs translation utils src translation utils js n at home diana projects medic api src services config watcher js n at array foreach n at home diana projects medic api src services config watcher js n at processticksandrejections node internal process task queues n at async home diana projects medic api server js cannot convert undefined or null to object environment instance local app api version would crash on malformed translation docs but after api will crash when languages are deleted because that will produce the malformed docs tombstones of deleted translation docs and serve them to translations utils the workaround for x projects encountering this is to delete all messages xx tombstones and restart api | 1
64,456 | 3,211,941,008 | IssuesEvent | 2015-10-06 13:32:15 | Metaswitch/sprout | https://api.github.com/repos/Metaswitch/sprout | closed | Bad BGCF config makes sprout cyclically crash | bug critical high-priority | I was editing my BGCF config and I made a typo:
```
{
"routes" : [
{
"name": "Default route",
"domain": "*",
"routes": [ "sip:ajh-dev.cw-ngv.com:5080;transport=tcp;lr" ]
}
]
}
```
The third key in the object was "routes" and it is supposed to be "route". When I reloaded sprout it immediately crashed with the following stack.
```
Thread 1 (Thread 0x7fb520895780 (LWP 17314)):
#0 0x00007fb51d524b99 in __libc_waitpid (pid=17458, stat_loc=stat_loc@entry=0x7fffc1984c20, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
#1 0x00007fb51d4aa2e2 in do_system (line=<optimized out>) at ../sysdeps/posix/system.c:148
#2 0x00000000004d2fc3 in Logger::backtrace (this=0x1396be0, data=<optimized out>) at /var/lib/jenkins/workspace/sprout/modules/cpp-common/src/logger.cpp:288
#3 0x000000000058642d in Log::backtrace (fmt=fmt@entry=0x71ff48 "Signal %d caught") at /var/lib/jenkins/workspace/sprout/modules/cpp-common/src/log.cpp:185
#4 0x00000000005e7b6c in signal_handler (sig=6) at main.cpp:1084
#5 <signal handler called>
#6 0x00007fb51d49acc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#7 0x00007fb51d49e0d8 in __GI_abort () at abort.c:89
#8 0x00007fb51d493b86 in __assert_fail_base (fmt=0x7fb51d5e4830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x6eb375 "false", file=file@entry=0x6eb790 "/var/lib/jenkins/workspace/sprout/modules/rapidjson/include/rapidjson/document.h", line=line@entry=821, function=function@entry=0x6ec220 <rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >& rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >::operator[]<rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >(rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> > const&)::__PRETTY_FUNCTION__> "rapidjson::GenericValue<Encoding, Allocator>& rapidjson::GenericValue<Encoding, Allocator>::operator[](const rapidjson::GenericValue<Encoding, SourceAllocator>&) [with SourceAllocator = rapidjson::Mem"...) at assert.c:92
#9 0x00007fb51d493c32 in __GI___assert_fail (assertion=0x6eb375 "false", file=0x6eb790 "/var/lib/jenkins/workspace/sprout/modules/rapidjson/include/rapidjson/document.h", line=821, function=0x6ec220 <rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >& rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >::operator[]<rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >(rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> > const&)::__PRETTY_FUNCTION__> "rapidjson::GenericValue<Encoding, Allocator>& rapidjson::GenericValue<Encoding, Allocator>::operator[](const rapidjson::GenericValue<Encoding, SourceAllocator>&) [with SourceAllocator = rapidjson::Mem"...) at assert.c:101
#10 0x0000000000512d29 in operator[]<rapidjson::MemoryPoolAllocator<> > (name=<synthetic pointer>, this=0x5) at /var/lib/jenkins/workspace/sprout/modules/rapidjson/include/rapidjson/document.h:821
#11 rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >::operator[] (this=this@entry=0x1b1cd84, name=name@entry=0x7097df "route") at /var/lib/jenkins/workspace/sprout/modules/rapidjson/include/rapidjson/document.h:809
#12 0x000000000057cdc9 in operator[] (name=0x7097df "route", this=0x1b1cd84) at /var/lib/jenkins/workspace/sprout/modules/rapidjson/include/rapidjson/document.h:811
#13 BgcfService::update_routes (this=this@entry=0x1b0b530) at bgcfservice.cpp:124
#14 0x000000000057bcaf in operator() (__p=0x1b0b530, this=<synthetic pointer>) at /usr/include/c++/4.8/bits/stl_function.h:551
#15 Updater (run_on_start=true, signal_waiter=<optimized out>, myFunctor=..., pointer=0x1b0b530, this=0x1b166c0) at /var/lib/jenkins/workspace/sprout/modules/cpp-common/include/updater.h:61
``` | 1.0 | Bad BGCF config makes sprout cyclically crash - I was editing my BGCF config and I made a typo:
```
{
"routes" : [
{
"name": "Default route",
"domain": "*",
"routes": [ "sip:ajh-dev.cw-ngv.com:5080;transport=tcp;lr" ]
}
]
}
```
The third key in the object was "routes" and it is supposed to be "route". When I reloaded sprout it immediately crashed with the following stack.
```
Thread 1 (Thread 0x7fb520895780 (LWP 17314)):
#0 0x00007fb51d524b99 in __libc_waitpid (pid=17458, stat_loc=stat_loc@entry=0x7fffc1984c20, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
#1 0x00007fb51d4aa2e2 in do_system (line=<optimized out>) at ../sysdeps/posix/system.c:148
#2 0x00000000004d2fc3 in Logger::backtrace (this=0x1396be0, data=<optimized out>) at /var/lib/jenkins/workspace/sprout/modules/cpp-common/src/logger.cpp:288
#3 0x000000000058642d in Log::backtrace (fmt=fmt@entry=0x71ff48 "Signal %d caught") at /var/lib/jenkins/workspace/sprout/modules/cpp-common/src/log.cpp:185
#4 0x00000000005e7b6c in signal_handler (sig=6) at main.cpp:1084
#5 <signal handler called>
#6 0x00007fb51d49acc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#7 0x00007fb51d49e0d8 in __GI_abort () at abort.c:89
#8 0x00007fb51d493b86 in __assert_fail_base (fmt=0x7fb51d5e4830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x6eb375 "false", file=file@entry=0x6eb790 "/var/lib/jenkins/workspace/sprout/modules/rapidjson/include/rapidjson/document.h", line=line@entry=821, function=function@entry=0x6ec220 <rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >& rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >::operator[]<rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >(rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> > const&)::__PRETTY_FUNCTION__> "rapidjson::GenericValue<Encoding, Allocator>& rapidjson::GenericValue<Encoding, Allocator>::operator[](const rapidjson::GenericValue<Encoding, SourceAllocator>&) [with SourceAllocator = rapidjson::Mem"...) at assert.c:92
#9 0x00007fb51d493c32 in __GI___assert_fail (assertion=0x6eb375 "false", file=0x6eb790 "/var/lib/jenkins/workspace/sprout/modules/rapidjson/include/rapidjson/document.h", line=821, function=0x6ec220 <rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >& rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >::operator[]<rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >(rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> > const&)::__PRETTY_FUNCTION__> "rapidjson::GenericValue<Encoding, Allocator>& rapidjson::GenericValue<Encoding, Allocator>::operator[](const rapidjson::GenericValue<Encoding, SourceAllocator>&) [with SourceAllocator = rapidjson::Mem"...) at assert.c:101
#10 0x0000000000512d29 in operator[]<rapidjson::MemoryPoolAllocator<> > (name=<synthetic pointer>, this=0x5) at /var/lib/jenkins/workspace/sprout/modules/rapidjson/include/rapidjson/document.h:821
#11 rapidjson::GenericValue<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >::operator[] (this=this@entry=0x1b1cd84, name=name@entry=0x7097df "route") at /var/lib/jenkins/workspace/sprout/modules/rapidjson/include/rapidjson/document.h:809
#12 0x000000000057cdc9 in operator[] (name=0x7097df "route", this=0x1b1cd84) at /var/lib/jenkins/workspace/sprout/modules/rapidjson/include/rapidjson/document.h:811
#13 BgcfService::update_routes (this=this@entry=0x1b0b530) at bgcfservice.cpp:124
#14 0x000000000057bcaf in operator() (__p=0x1b0b530, this=<synthetic pointer>) at /usr/include/c++/4.8/bits/stl_function.h:551
#15 Updater (run_on_start=true, signal_waiter=<optimized out>, myFunctor=..., pointer=0x1b0b530, this=0x1b166c0) at /var/lib/jenkins/workspace/sprout/modules/cpp-common/include/updater.h:61
``` | priority | bad bgcf config makes sprout cyclically crash i was editing my bgcf config and i made a typo routes name default route domain routes the third key in the object was routes and it is supposed to be route when i reloaded sprout it immediately crashed with the following stack thread thread lwp in libc waitpid pid stat loc stat loc entry options options entry at sysdeps unix sysv linux waitpid c in do system line at sysdeps posix system c in logger backtrace this data at var lib jenkins workspace sprout modules cpp common src logger cpp in log backtrace fmt fmt entry signal d caught at var lib jenkins workspace sprout modules cpp common src log cpp in signal handler sig at main cpp in gi raise sig sig entry at nptl sysdeps unix sysv linux raise c in gi abort at abort c in assert fail base fmt s s s u s sassertion s failed n n assertion assertion entry false file file entry var lib jenkins workspace sprout modules rapidjson include rapidjson document h line line entry function function entry rapidjson memorypoolallocator rapidjson genericvalue rapidjson memorypoolallocator operator rapidjson genericvalue rapidjson memorypoolallocator const pretty function rapidjson genericvalue rapidjson genericvalue operator const rapidjson genericvalue with sourceallocator rapidjson mem at assert c in gi assert fail assertion false file var lib jenkins workspace sprout modules rapidjson include rapidjson document h line function rapidjson memorypoolallocator rapidjson genericvalue rapidjson memorypoolallocator operator rapidjson genericvalue rapidjson memorypoolallocator const pretty function rapidjson genericvalue rapidjson genericvalue operator const rapidjson genericvalue with sourceallocator rapidjson mem at assert c in operator name this at var lib jenkins workspace sprout modules rapidjson include rapidjson document h rapidjson genericvalue rapidjson memorypoolallocator operator this this entry name name entry route at var lib jenkins workspace sprout modules rapidjson include rapidjson document h in operator name route this at var lib jenkins workspace sprout modules rapidjson include rapidjson document h bgcfservice update routes this this entry at bgcfservice cpp in operator p this at usr include c bits stl function h updater run on start true signal waiter myfunctor pointer this at var lib jenkins workspace sprout modules cpp common include updater h | 1
555,727 | 16,464,239,082 | IssuesEvent | 2021-05-22 04:25:03 | xournalpp/xournalpp | https://api.github.com/repos/xournalpp/xournalpp | closed | Crash when scrolling over slides with different format | Crash Zoom bug confirmed priority::high | Affects versions :
OS: Linux 20.04
(Linux only) Desktop environment: Gnome 3.36, X11
Which version of libgtk do you use: 3.24.20
Version of Xournal++: 1.1.0+dev
Installation method: sudo apt
**Describe the bug**
I work on Powerpoint Slides. When I insert a new blank page this has another Format than the Powerpoint slides.
If I let Xournal maximize the slides and scroll up and down through the different size slides it crashes. And it really crashes, it unfortunately does not save the current state (as it normally does :-) ), and when you restart you can easily recover it, as a pop up just gives you the option. It really crashes and then you can only search for the recovery file yourself, which is often missing a lot, since it was autosaved a minute before it crashed.
**To Reproduce**
Steps to reproduce the behavior:
1. Open the file I uploaded
[Crash-Test-Slides.zip](https://github.com/xournalpp/xournalpp/files/6309858/Crash-Test-Slides.zip)
2. Make sure 'Zoom fit to screen' is activated
3. Scroll up and down through the empty slides with different formats
4. For me it crashes every time
**Expected behavior**
Normal Scrolling from slide to slide. Changing automaically the Zoom fit to screen.
**Additional context**
Is there an option to make Xournal insert new empty pages with the same size as the page before?
(At the moment this can only be done with first duplicating page and then apply 'Plain' to current page. This is more of a feature request, but I think it would be in general very nice if it just inserted pages of the same size as a standard.
| 1.0 | Crash when scrolling over slides with different format - Affects versions :
OS: Linux 20.04
(Linux only) Desktop environment: Gnome 3.36, X11
Which version of libgtk do you use: 3.24.20
Version of Xournal++: 1.1.0+dev
Installation method: sudo apt
**Describe the bug**
I work on PowerPoint slides. When I insert a new blank page, it has a different format from the PowerPoint slides.
If I let Xournal maximize the slides and scroll up and down through the different-size slides, it crashes. And it really crashes: it unfortunately does not save the current state (as it normally does :-) ), so when you restart there is no pop-up offering easy recovery. You can only search for the recovery file yourself, which is often missing a lot, since it was autosaved a minute before the crash.
**To Reproduce**
Steps to reproduce the behavior:
1. Open the file I uploaded
[Crash-Test-Slides.zip](https://github.com/xournalpp/xournalpp/files/6309858/Crash-Test-Slides.zip)
2. Make sure 'Zoom fit to screen' is activated
3. Scroll up and down through the empty slides with different formats
4. For me it crashes every time
**Expected behavior**
Normal scrolling from slide to slide, with the zoom automatically changing to fit each page to the screen.
**Additional context**
Is there an option to make Xournal insert new empty pages with the same size as the page before?
(At the moment this can only be done with first duplicating page and then apply 'Plain' to current page. This is more of a feature request, but I think it would be in general very nice if it just inserted pages of the same size as a standard.
| priority | crash when scrolling over slides with different format affects versions os linux linux only desktop environment gnome which version of libgtk do you use version of xournal dev installation method sudo apt describe the bug i work on powerpoint slides when i insert a new blank page this has another format than the powerpoint slides if i let xournal maximize the slides and scroll up and down through the different size slides it crashes and it really crashes it unfortunately does not save the current state as it normally does and when you restart you can easily recover it as a pop up just gives you the option it really crashes and then you can only search for the recovery file yourself which is often missing a lot since it was autosaved a minute before it crashed to reproduce steps to reproduce the behavior open the file i uploaded make sure zoom fit to screen is activated scroll up and down through the empty slides with different formats for me it crashes every time expected behavior normal scrolling from slide to slide changing automaically the zoom fit to screen additional context is there an option to make xournal insert new empty pages with the same size as the page before at the moment this can only be done with first duplicating page and then apply plain to current page this is more of a feature request but i think it would be in general very nice if it just inserted pages of the same size as a standard | 1 |
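The crash above is tied to recomputing "Zoom fit to screen" while pages of different sizes scroll past. As a rough illustration of what that recomputation involves — a hypothetical sketch, not Xournal++'s actual code, with invented page and view sizes — the zoom is a per-page scale factor that must be refreshed whenever the visible page changes, with a guard against degenerate page dimensions:

```python
def fit_zoom(page_w, page_h, view_w, view_h):
    """Return the zoom factor that fits a page into the view, preserving aspect ratio."""
    if page_w <= 0 or page_h <= 0:
        raise ValueError("page dimensions must be positive")
    return min(view_w / page_w, view_h / page_h)

# A document mixing 4:3 slide-format pages with one A4-portrait blank page.
pages = [(1024, 768), (595, 842), (1024, 768)]
view_w, view_h = 1280, 960

# The zoom must be recomputed every time the visible page changes while scrolling.
zooms = [fit_zoom(w, h, view_w, view_h) for (w, h) in pages]
```

Because adjacent pages here need different zoom factors, every scroll boundary triggers a refit — which is exactly the code path the reproduction steps exercise.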
168,449 | 6,375,795,259 | IssuesEvent | 2017-08-02 04:49:24 | coup-de-foudre/FM_transmitter | https://api.github.com/repos/coup-de-foudre/FM_transmitter | opened | Pre "dress rehearsal" prep | hardware help wanted priority:high | We need to have a session to work on a few final things once we decide that development is "done":
- [ ] Decide on a buffer size... either 1024, 512, or maybe 256 samples.
- [ ] Test code in detail
- [ ] Create 3 raspbian slim images. Then for each one,
- [ ] Create unique hostname (`raspberry-[1-3]`?)
- [ ] Set static IP addresses for ethernet, if possible (`192.168.0.[10,20,30]`?)
- [ ] Configure images for low latency
- [ ] Load and install newest code (`make install`)
- [ ] Tune installs to individual center frequencies, write down (on paper!) how each one is tuned
| 1.0 | Pre "dress rehearsal" prep - We need to have a session to work on a few final things once we decide that development is "done":
- [ ] Decide on a buffer size... either 1024, 512, or maybe 256 samples.
- [ ] Test code in detail
- [ ] Create 3 raspbian slim images. Then for each one,
- [ ] Create unique hostname (`raspberry-[1-3]`?)
- [ ] Set static IP addresses for ethernet, if possible (`192.168.0.[10,20,30]`?)
- [ ] Configure images for low latency
- [ ] Load and install newest code (`make install`)
- [ ] Tune installs to individual center frequencies, write down (on paper!) how each one is tuned
| priority | pre dress rehearsal prep we need to have a session to work on a few final things once we decide that development is done decide on a buffer size either or maybe samples test code in detail create raspbian slim images then for each one create unique hostname raspberry set static ip addresses for ethernet if possible configure images for low latency load and install newest code make install tune installs to individual center frequencies write down on paper how each one is tuned | 1 |
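The buffer-size decision in the checklist above is a latency trade-off: one buffer of N samples takes N / sample_rate seconds to fill or play out. A quick back-of-the-envelope sketch, assuming a 44.1 kHz sample rate (an assumption — the project's actual rate may differ):

```python
SAMPLE_RATE = 44_100  # Hz — assumed; the project's actual rate may differ

def buffer_latency_ms(samples, rate=SAMPLE_RATE):
    """Time needed to fill (or play out) one audio buffer, in milliseconds."""
    return samples / rate * 1000

# The three candidate sizes from the checklist.
latencies = {n: round(buffer_latency_ms(n), 2) for n in (1024, 512, 256)}
# Smaller buffers cut latency but refill more often, raising underrun risk.
```

At 44.1 kHz the candidates work out to roughly 23 ms, 12 ms, and 6 ms per buffer, which is the axis along which "configure images for low latency" trades against stability.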
717,124 | 24,662,633,407 | IssuesEvent | 2022-10-18 07:57:45 | IAmTamal/Milan | https://api.github.com/repos/IAmTamal/Milan | closed | Misplaced vector images, | 🟧 priority: high 🛠 goal: fix 🛠 status : under development hacktoberfest | ### Description
On the home page, the second div has 2 vector images along with written content, which are not aligned.
### Screenshots

### Additional information
_No response_
### 🥦 Browser
Google Chrome
### 👀 Have you checked if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Contributing Guidelines?
- [X] I have read the [Contributing Guidelines](https://github.com/IAmTamal/Milan/blob/main/CONTRIBUTING.md)
### Are you willing to work on this issue ?
Yes I am willing to submit a PR! | 1.0 | Misplaced vector images, - ### Description
On the home page, the second div has 2 image vectors along with written content which is not aligned.
### Screenshots

### Additional information
_No response_
### 🥦 Browser
Google Chrome
### 👀 Have you checked if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Contributing Guidelines?
- [X] I have read the [Contributing Guidelines](https://github.com/IAmTamal/Milan/blob/main/CONTRIBUTING.md)
### Are you willing to work on this issue ?
Yes I am willing to submit a PR! | priority | misplaced vector images description on the home page the second div has image vectors along with written content which is not aligned screenshots additional information no response 🥦 browser google chrome 👀 have you checked if this issue has been raised before i checked and didn t find similar issue 🏢 have you read the contributing guidelines i have read the are you willing to work on this issue yes i am willing to submit a pr | 1 |
770,691 | 27,051,046,067 | IssuesEvent | 2023-02-13 13:16:13 | epicmaxco/vuestic-ui | https://api.github.com/repos/epicmaxco/vuestic-ui | closed | Landing buttons fix | BUG docs HIGH PRIORITY | 1. For github button here use Secondary button, not Plain (Secondary has styles on hover)


2. For Admin block - fix buttons and stars component according to design (only for this block, not for the Header)
NOW:

DESIGN:

| 1.0 | Landing buttons fix - 1. For github button here use Secondary button, not Plain (Secondary has styles on hover)


2. For Admin block - fix buttons and stars component according to design (only for this block, not for the Header)
NOW:

DESIGN:

| priority | landing buttons fix for github button here use secondary button not plain secondary has styles on hover for admin block fix buttons and stars component according to design only for this block not for the header now design | 1 |
406,426 | 11,893,120,617 | IssuesEvent | 2020-03-29 09:57:17 | AY1920S2-CS2103-W14-2/main | https://api.github.com/repos/AY1920S2-CS2103-W14-2/main | closed | Add ability to undo commands | priority.High | ### Description
As a clumsy user, I want to be able to undo commands so that I can fix mistakes in commands such as typos.
### Tasks
- [x] Implement undo function with tests
- [x] Add command to undo
- [x] Add command word in parser
- [x] Display alert | 1.0 | Add ability to undo commands - ### Description
As a clumsy user, I want to be able to undo commands so that I can fix mistakes in commands such as typos.
### Tasks
- [x] Implement undo function with tests
- [x] Add command to undo
- [x] Add command word in parser
- [x] Display alert | priority | add ability to undo commands description as a clumsy user i want to be able to undo commands so that i can fix mistakes in commands such as typos tasks implement undo function with tests add command to undo add command word in parser display alert | 1 |
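One common way to implement the undo function described in the tasks above is a history stack of state snapshots: every mutating command pushes the previous state, and undo pops it back. A minimal sketch — class and method names are invented for illustration, not the project's actual API:

```python
class AddressBook:
    """Toy model: each mutating command snapshots state so it can be undone."""

    def __init__(self):
        self.contacts = []
        self._history = []  # snapshots of the contact list before each command

    def _snapshot(self):
        self._history.append(list(self.contacts))

    def add(self, name):
        self._snapshot()
        self.contacts.append(name)

    def delete(self, name):
        self._snapshot()
        self.contacts.remove(name)

    def undo(self):
        if not self._history:
            raise RuntimeError("nothing to undo")
        self.contacts = self._history.pop()

book = AddressBook()
book.add("Alice")
book.add("Bob")
book.undo()  # reverts the addition of Bob
```

Snapshotting the whole list is the simplest scheme; a command-pattern variant that stores inverse operations scales better for large state but needs more bookkeeping.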
600,642 | 18,348,082,038 | IssuesEvent | 2021-10-08 09:03:37 | AY2122S1-CS2103T-T10-2/tp | https://api.github.com/repos/AY2122S1-CS2103T-T10-2/tp | opened | The same participant will map to different IDs in the event and participant lists | priority.High type.Bug severity.High | When a participant is added to an event and Managera is restarted, the implemented hashmap will count the participant in the event and the participant in the participant list as separate participants.
e.g., The participant list contains a participant with ID alexyeo1 but the event with this participant has a participant with ID alexyeo2. | 1.0 | The same participant will map to different IDs in the event and participant lists - When a participant is added to an event and Managera is restarted, the implemented hashmap will count the participant in the event and the participant in the participant list as separate participants.
e.g., The participant list contains a participant with ID alexyeo1 but the event with this participant has a participant with ID alexyeo2. | priority | the same participant will map to different ids in the event and participant lists when a participant is added to an event and managera is restarted the implemented hashmap will count the participant in the event and the participant in the participant list as separate participants e g the participant list contains a participant with id but the event with this participant has a participant with id | 1 |
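The bug above boils down to two code paths generating IDs independently for the same person after a restart. A sketch of one possible fix — routing all ID generation through a single shared registry so the same participant always maps to one ID. The names follow the report's alexyeo example; this is not Managera's actual code:

```python
class IdRegistry:
    """Single source of truth for participant IDs, keyed by normalised name."""

    def __init__(self):
        self._ids = {}
        self._counts = {}

    def id_for(self, name):
        key = name.lower().replace(" ", "")
        if key not in self._ids:
            n = self._counts.get(key, 0) + 1
            self._counts[key] = n
            self._ids[key] = f"{key}{n}"
        return self._ids[key]

registry = IdRegistry()
a = registry.id_for("Alex Yeo")  # looked up via the participant list
b = registry.id_for("Alex Yeo")  # looked up again via an event — same ID
```

Both lookups return the same ID because the registry, not each list, owns the mapping — which is the invariant the reported bug violates.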
185,072 | 6,718,670,033 | IssuesEvent | 2017-10-15 15:32:37 | juanmbellini/PowerUp | https://api.github.com/repos/juanmbellini/PowerUp | opened | Test fails randomly in Travis | Bug High Priority | Random numbers generated in [this test](https://github.com/juanmbellini/PowerUp/blob/master/paw-webapp/model/src/test/java/ar/edu/itba/paw/webapp/model/ReviewTest.java#L702) are sometimes failing, forcing us to re-run certain Travis builds, eg. https://travis-ci.org/juanmbellini/PowerUp/builds/288225596#L2661
We should either use appropriate random numbers or stick to arbitrary but FIXED numbers for tests. | 1.0 | Test fails randomly in Travis - Random numbers generated in [this test](https://github.com/juanmbellini/PowerUp/blob/master/paw-webapp/model/src/test/java/ar/edu/itba/paw/webapp/model/ReviewTest.java#L702) are sometimes failing, forcing us to re-run certain Travis builds, eg. https://travis-ci.org/juanmbellini/PowerUp/builds/288225596#L2661
We should either use appropriate random numbers or stick to arbitrary but FIXED numbers for tests. | priority | test fails randomly in travis random numbers generated in are sometimes failing forcing us to re run certain travis builds eg we should either use appropriate random numbers or stick to arbitrary but fixed numbers for tests | 1 |
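The fix suggested above — appropriate seeding or fixed numbers — can be shown in miniature. The project itself is Java/JUnit, so this Python sketch only illustrates the principle: a locally seeded generator makes "random" test data reproducible, so a failure in CI can be replayed instead of recurring at random:

```python
import random

def make_test_ratings(seed=42, n=5, lo=1, hi=10):
    """Reproducible 'random' test inputs: a fixed-seed, local generator."""
    rng = random.Random(seed)  # local instance: no global-state leakage between tests
    return [rng.randint(lo, hi) for _ in range(n)]

run_a = make_test_ratings()
run_b = make_test_ratings()
# Identical across runs and machines, so any failure is deterministic.
```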
223,096 | 7,446,707,727 | IssuesEvent | 2018-03-28 09:58:36 | commons-app/apps-android-commons | https://api.github.com/repos/commons-app/apps-android-commons | closed | Translation updates on hold | assigned beginner friendly bug high priority | Hi, I've paused translation updates due to unresolved parsing issue I commented in https://github.com/commons-app/apps-android-commons/commit/96173e26cf07ef0d3f23cb8bb9ca4b21820eddf5#r27787891
Let's try to find a quick way to solve this. | 1.0 | Translation updates on hold - Hi, I've paused translation updates due to unresolved parsing issue I commented in https://github.com/commons-app/apps-android-commons/commit/96173e26cf07ef0d3f23cb8bb9ca4b21820eddf5#r27787891
Let's try to find a quick way to solve this. | priority | translation updates on hold hi i ve paused translation updates due to unresolved parsing issue i commented in let s try to find a quick way to solve this | 1 |
29,292 | 2,714,448,705 | IssuesEvent | 2015-04-10 03:48:35 | OpenConceptLab/oclapi | https://api.github.com/repos/OpenConceptLab/oclapi | opened | Switch source exports to background process and cache | enhancement high-priority | It is not currently possible to request a full export of a source or to fetch other large datasets due to the extended processing time. Even if the processing time were reduced, handling very long-running processes as web requests is not a great approach. This ticket focuses on fetching full exports of sources only -- fetching other large datasets is out of scope for this phase.
Rough Draft Export API Specs here: https://github.com/OpenConceptLab/oclapi/wiki/Export-API
### COMBINING TEXT FROM EMAILS HERE --- WILL CHANGE
Preparation of an export will fire up a celery process.
The background process would adjust the API to support the following:
* request export
* get export status
* download latest export
<hr>
A last updated date is included in the response header of files, which can be used to fetch only the diff between the export and the latest version of the source or collection.
How is a diff represented? Specifically, how do I differentiate between a create vs. an update vs. a delete?
If new mappings are added to the latest version of a source and then an export is recreated, will the recreated export be different than the original one? i.e. include the new mappings?
How are mappings stored in source versions?
I would think so. Mappings are represented within a source version as a list of Object IDs.
What is the correct format/filetype field to include in the request/response headers?
I've been using 'application/zip'.
What is the best way to handle a request that has so many results that it's going to break? How do we fail gracefully?
Is this still a concern? Do you mean if the attachment is too large to transmit over HTTP?
How do we allow an administrator to still export a diff if it's too large to export via an HTTP request? e.g. can we still trigger this to be processed in the celery environment? Can we do this via the API? Creating diffs between versions is important and not currently supported.
I believe the new design addresses this.
How do we get the latest version info for a source only? e.g. GET /orgs/CIEL/sources/CIEL/versions/?limit=1 Is there a way to sort this?
/orgs/CIEL/sources/CIEL/latest/
Supported HTTP Verbs:
POST - triggers OCL to create / recreate export file
GET - returns the file if it's ready; triggers creation of the file if it doesn't exist; returns different status/error codes based on the result
HEAD - returns whether file exists?
DELETE - deletes the export file
I'm not sure this is the best approach. I think the POST should return a token unique to that export, and GET and HEAD should accept the token. This way, there is no ambiguity between a POST and subsequent GET/HEAD calls.
Request header:
Authorization: Token ...
Compress: true
Format/filetype - ??
The "Compress" header shouldn't be necessary, right? We'll always compress?
Parameters - NONE? Should "includeRetired" be supported?
I would think all the same parameters as a standard GET call. | 1.0 | Switch source exports to background process and cache - It is not currently possible to request a full export of a source or to fetch other large datasets due to the extended processing time. Even if the processing time were reduced, handling very long-running processes as web requests is not a great approach. This ticket focuses on fetching full exports of sources only -- fetching other large datasets is out of scope for this phase.
Rough Draft Export API Specs here: https://github.com/OpenConceptLab/oclapi/wiki/Export-API
### COMBINING TEXT FROM EMAILS HERE --- WILL CHANGE
Preparation of an export will fire up a celery process.
The background process would adjust the API to support the following:
* request export
* get export status
* download latest export
<hr>
A last updated date is included in the response header of files, which can be used to fetch only the diff between the export and the latest version of the source or collection.
How is a diff represented? Specifically, how do I differentiate between a create vs. an update vs. a delete?
If new mappings are added to the latest version of a source and then an export is recreated, will the recreated export be different than the original one? i.e. include the new mappings?
How are mappings stored in source versions?
I would think so. Mappings are represented within a source version as a list of Object IDs.
What is the correct format/filetype field to include in the request/response headers?
I've been using 'application/zip'.
What is the best way to handle a request that has so many results that it's going to break? How do we fail gracefully?
Is this still a concern? Do you mean if the attachment is too large to transmit over HTTP?
How do we allow an administrator to still export a diff if it's too large to export via an HTTP request? e.g. can we still trigger this to be processed in the celery environment? Can we do this via the API? Creating diffs between versions is important and not currently supported.
I believe the new design addresses this.
How do we get the latest version info for a source only? e.g. GET /orgs/CIEL/sources/CIEL/versions/?limit=1 Is there a way to sort this?
/orgs/CIEL/sources/CIEL/latest/
Supported HTTP Verbs:
POST - triggers OCL to create / recreate export file
GET - returns the file if it's ready; triggers creation of the file if it doesn't exist; returns different status/error codes based on the result
HEAD - returns whether file exists?
DELETE - deletes the export file
I'm not sure this is the best approach. I think the POST should return a token unique to that export, and GET and HEAD should accept the token. This way, there is no ambiguity between a POST and subsequent GET/HEAD calls.
Request header:
Authorization: Token ...
Compress: true
Format/filetype - ??
The "Compress" header shouldn't be necessary, right? We'll always compress?
Parameters - NONE? Should "includeRetired" be supported?
I would think all the same parameters as a standard GET call. | priority | switch source exports to background process and cache it is not currently possible to request a full export of a source or to fetch other large datasets due to the extended processing time even if the processing time were reduced handling very long running processes as web requests is not a great approach this ticket focuses on fetching full exports of sources only fetching other large datasets is out of scope for this phase rough draft export api specs here combining text from emails here will change preparation of an export will fire up a celery process the background process would adjust the api to support the request export get export status download latest export a last updated date is included in the response header of files which can be used to fetch only the diff between the export and the latest version of the source or collection how is a diff represented specifically how do i differentiate between a create vs an update vs a delete if new mappings are added to the latest version of a source and then an export is recreated will the recreated export be different than the original one i e include the new mappings how are mappings stored in source versions i would think so mappings are represented within a source version as a list of object ids what is the correct format filetype field to include in the request response headers i ve been using application zip what is the best way to handle a request that has too many results that its going to break how do we fail gracefully is this still a concern do you mean if the attachment is too large to transmit over http how do we allow an administrator to still export a diff if its too large to export via an http request e g can we still trigger this to be processed in the celery environment can we do this via the api creating diffs between versions is important and not currently supported i believe the new design addresses this how do we get 
the latest version info for a source only e g get orgs ciel sources ciel versions limit is there a way to sort this orgs ciel sources ciel latest supported http verbs post triggers ocl to create recreate export file get returns the file if its ready triggers creation of file if it doesn t exist returns different status error codes based on the result head returns whether file exists delete deletes the export file i m not sure this is the best approach i think the post should return a token unique to that export and get and head should accept the token this way there is no ambiguity between a post and subsequent get head calls request header authorization token compress true format filetype the compress header shouldn t be necessary right we ll always compress parameters none should includeretired be supported i would think all the same parameters as a standard get call | 1 |
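The design the discussion above converges on — POST starts the export and returns a token, GET/HEAD poll by token until a background worker finishes — can be sketched as a small state machine. Endpoint and method names here are hypothetical, not OCL's actual API; in the real system the worker step would be a Celery task writing the zip to a cache:

```python
import uuid

class ExportService:
    """Toy token-based export flow: POST creates a job, GET polls by token."""

    def __init__(self):
        self._jobs = {}

    def post_export(self, source):
        token = uuid.uuid4().hex
        self._jobs[token] = {"source": source, "status": "processing", "file": None}
        return token  # no ambiguity between this POST and later GET/HEAD calls

    def get_export(self, token):
        job = self._jobs.get(token)
        if job is None:
            return 404, None
        if job["status"] != "ready":
            return 202, None  # accepted, still processing — poll again later
        return 200, job["file"]

    def worker_finish(self, token, payload):
        # Stands in for the background (Celery) task completing and caching the file.
        self._jobs[token].update(status="ready", file=payload)

svc = ExportService()
tok = svc.post_export("/orgs/CIEL/sources/CIEL/")
early = svc.get_export(tok)        # still processing
svc.worker_finish(tok, b"zip-bytes")
done = svc.get_export(tok)         # file is ready
```

Returning a token from POST, as proposed in the thread, also sidesteps the "recreate vs. fetch" ambiguity of making GET implicitly trigger creation.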
162,156 | 6,148,383,925 | IssuesEvent | 2017-06-27 17:42:58 | RPI-HASS/rpi_csdt_community | https://api.github.com/repos/RPI-HASS/rpi_csdt_community | closed | Password reset option on community site | high priority | Porting Issues:
chuck211991 commented on Sep 1, 2016
"A number of students in Cleveland forgot their password over the course of our three days. Neither they nor I could find a password reset button. Charles told me I could log into the admin panel and reset user passwords that way but I was unable to log in to the admin panel. We should give users the option of resetting passwords themselves anyway (we do collect their emails after all)." -- CSnap/CSnap#29
@chuck211991 chuck211991 referenced this issue in CSnap/CSnap on Sep 1, 2016
Closed
Medium priority: Password reset option on community site #29
@chuck211991
Member
chuck211991 commented on Sep 6, 2016
This issue is benched pending the answers to #13 | 1.0 | Password reset option on community site - Porting Issues:
chuck211991 commented on Sep 1, 2016
"A number of students in Cleveland forgot their password over the course of our three days. Neither they nor I could find a password reset button. Charles told me I could log into the admin panel and reset user passwords that way but I was unable to log in to the admin panel. We should give users the option of resetting passwords themselves anyway (we do collect their emails after all)." -- CSnap/CSnap#29
@chuck211991 chuck211991 referenced this issue in CSnap/CSnap on Sep 1, 2016
Closed
Medium priority: Password reset option on community site #29
@chuck211991
Member
chuck211991 commented on Sep 6, 2016
This issue is benched pending the answers to #13 | priority | password reset option on community site porting issues commented on sep a number of students in cleveland forgot their password over the course of our three days neither they nor i could find a password reset button charles told me i could log into the admin panel and reset user passwords that way but i was unable to log in to the admin panel we should give users the option of resetting passwords themselves anyway we do collect their emails after all csnap csnap referenced this issue in csnap csnap on sep closed medium priority password reset option on community site member commented on sep this issue is benched pending the answers to | 1 |
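For context on the mechanism being requested above: an emailed reset link usually carries a signed, expiring token tied to the user's address. A generic sketch of that mechanism — a Django site like this one would normally just enable `django.contrib.auth`'s built-in password-reset views instead; the secret and field choices below are placeholders, not the site's code:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # placeholder — load from config, never hard-code
MAX_AGE = 3600                  # token valid for one hour

def make_token(email, now=None):
    """Sign email + timestamp so the emailed link can't be forged."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, f"{email}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{sig}"

def check_token(email, token, now=None):
    """Valid only if the signature matches and the token hasn't expired."""
    ts, _, sig = token.partition(":")
    expected = hmac.new(SECRET, f"{email}:{ts}".encode(), hashlib.sha256).hexdigest()
    fresh = (now if now is not None else time.time()) - int(ts) <= MAX_AGE
    return fresh and hmac.compare_digest(sig, expected)

tok = make_token("student@example.edu", now=1_000_000)
ok = check_token("student@example.edu", tok, now=1_000_500)     # within the hour
stale = check_token("student@example.edu", tok, now=1_004_000)  # past MAX_AGE
```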
132,003 | 5,167,772,361 | IssuesEvent | 2017-01-17 19:43:02 | jmickle66666666/wad-js | https://api.github.com/repos/jmickle66666666/wad-js | closed | Text panels no longer scroll | Bug Priority: High | Large text files, such as most DECORATE code files, no longer scroll in the element. | 1.0 | Text panels no longer scroll - Large text files, such as most DECORATE code files, no longer scroll in the element. | priority | text panels no longer scroll large text files such as most decorate code files no longer scroll in the element | 1 |
240,698 | 7,804,725,498 | IssuesEvent | 2018-06-11 08:30:37 | ddalthorp/GenEst | https://api.github.com/repos/ddalthorp/GenEst | closed | output summary tables | High priority | How can the user output summary tables, e.g., detection probability, M, etc. to excel?
I tried copying and pasting into excel, but that didn't work... | 1.0 | output summary tables - How can the user output summary tables, e.g., detection probability, M, etc. to excel?
I tried copying and pasting into excel, but that didn't work... | priority | output summary tables how can the user output summary tables e g detection probability m etc to excel i tried copying and pasting into excel but that didn t work | 1 |
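A common route for getting summary tables like the above into Excel is writing CSV, which Excel opens directly. A sketch with invented table contents — GenEst's real column names and values may differ:

```python
import csv
import io

def summary_to_csv(rows, header):
    """Serialise a summary table as CSV text that Excel can open directly."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical summary values, matching the quantities mentioned in the issue.
header = ["parameter", "estimate", "lower", "upper"]
rows = [
    ["detection probability", 0.62, 0.48, 0.74],
    ["M", 118, 96, 147],
]
csv_text = summary_to_csv(rows, header)  # write to a .csv file to open in Excel
```

CSV round-trips more reliably than clipboard paste because each cell is explicitly delimited rather than depending on how the source app formats copied text.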
469,148 | 13,502,041,210 | IssuesEvent | 2020-09-13 06:09:42 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | No conditional function for failing auth flow and redirect to the redirect URL | Complexity/High Component/Adaptive Auth Priority/High gateway improvement | **Description:**
The [sendError function](https://docs.wso2.com/display/IS570/Adaptive+Authentication+JS+API+Reference#AdaptiveAuthenticationJSAPIReference-sendError(url,parameters)) only ends the authentication and redirects to an error page. A function is required to end the authentication flow as failed and to redirect to a redirect URL.
1. Using sendError function to redirect to a desired redirect URI is not the intended purpose of the function.
2. sendError function is used to redirect to an error page other than a redirect URL.
3. According to [OIDC specification](https://openid.net/specs/openid-connect-core-1_0.html#rfc.section.3.1.2.6), any extra parameter other than the parameters specified in [oAuth specification](https://tools.ietf.org/html/rfc6749#section-4.1.2.1) should not be appended to the redirect URL when the authentication is failed.
4. But when the sendError method is used to redirect to a redirect URL (which is not the standard practice), two extra parameters named "sp" and "tenantDomain" are appended, violating the above specs. The purpose of passing the tenant domain and SP in the error URL generated by this method would be to pass information to custom error pages that are themed based on the tenant or application. So we cannot remove appending those parameters from the sendError function.
5. There is no function in the framework to end the authentication flow as failed and redirect to a redirect URL with required parameters appended to the URL.
**Steps to reproduce:**
1. In adaptive authentication script, use the sendError function to redirect to a redirect URL.
2. The "sp" and "tenantDomain" parameters will be appended to the redirect URL. | 1.0 | No conditional function for failing auth flow and redirect to the redirect URL - **Description:**
The [sendError function](https://docs.wso2.com/display/IS570/Adaptive+Authentication+JS+API+Reference#AdaptiveAuthenticationJSAPIReference-sendError(url,parameters)) only ends the authentication and redirects to an error page. A function is required to end the authentication flow as failed and to redirect to a redirect URL.
1. Using sendError function to redirect to a desired redirect URI is not the intended purpose of the function.
2. sendError function is used to redirect to an error page other than a redirect URL.
3. According to [OIDC specification](https://openid.net/specs/openid-connect-core-1_0.html#rfc.section.3.1.2.6), any extra parameter other than the parameters specified in [oAuth specification](https://tools.ietf.org/html/rfc6749#section-4.1.2.1) should not be appended to the redirect URL when the authentication is failed.
4. But when the sendError method is used to redirect to a redirect URL (which is not the standard practice), two extra parameters named "sp" and "tenantDomain" are appended, violating the above specs. The purpose of passing the tenant domain and SP in the error URL generated by this method would be to pass information to custom error pages that are themed based on the tenant or application. So we cannot remove appending those parameters from the sendError function.
5. There is no function in the framework to end the authentication flow as failed and redirect to a redirect URL with required parameters appended to the URL.
**Steps to reproduce:**
1. In adaptive authentication script, use the sendError function to redirect to a redirect URL.
2. The "sp" and "tenantDomain" parameters will be appended to the redirect URL. | priority | no conditional function for failing auth flow and redirect to the redirect url description the only ends the authentication and redirects to an error page a function is required to end the authentication flow as failed and to redirect to a redirect url using senderror function to redirect to a desired redirect uri is not the intended purpose of the function senderror function is used to redirect to an error page other than a redirect url according to any extra parameter other than the parameters specified in should not be appended to the redirect url when the authentication is failed but when using senderror method is used to redirect to a redirect url which is not the standard practice two extra parameters named sp and tenantdomain are appended violating the above specs the purpose of passing the tenant domain and sp in the error url generated by this method would be to pass information to custom error pages that are themed based on tenant or application so we cannot remove appending those parameters from senderror function there is no function in the framework to end the authentication flow as failed and redirect to a redirect url with required parameters appended to the url steps to reproduce in adaptive authentication script use the senderror function to redirect to a redirect url the sp and tenantdomain parameters will be appended to the redirect url | 1 |
765,977 | 26,867,507,742 | IssuesEvent | 2023-02-04 03:16:15 | ATLauncher/ATLauncher | https://api.github.com/repos/ATLauncher/ATLauncher | opened | Universal modpack search/homepage | enhancement high-priority | I'd like a universal modpack search as well as a universal homepage for packs, showing the featured/top packs from all platforms.
How they're all weighted against each other and shown is an open question 🤷🏻, but it's the ideal end state of the Packs tab. I think with this it'd be mostly complete for me and I'd be happy with it.
It's been brought up multiple times that packs can be hard to find: if someone comes to ATLauncher looking for a pack on CurseForge, they might just go to the Packs tab and search instantly (which only searches ATLauncher packs), so this would help UX a lot. | 1.0 | Universal modpack search/homepage - I'd like a universal modpack search as well as a universal homepage for packs, showing the featured/top packs from all platforms.
How they're all weighted against each other and shown is an open question 🤷🏻, but it's the ideal end state of the Packs tab. I think with this it'd be mostly complete for me and I'd be happy with it.
It's been bought up multiple times that packs can be hard to find, if someone comes to ATLauncher and is looking for a pack on CurseForge, they might just go to Packs tab, then search instantly (which only searches ATLauncher packs) so this would help UX a lot. | priority | universal modpack search homepage i d like a universal modpack search as well as a universal homepage for packs showing the featured top packs from all platforms how they re all weighed against each other and shown 🤷🏻 but it s the ideal endstate of the packs tab i think with this it d be mostly complete for me and i d be happy with it it s been bought up multiple times that packs can be hard to find if someone comes to atlauncher and is looking for a pack on curseforge they might just go to packs tab then search instantly which only searches atlauncher packs so this would help ux a lot | 1 |
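How packs from different platforms would be weighed against each other is left open above; one hedged possibility is normalising each platform's popularity to its own maximum before merging, so a platform with larger absolute download counts doesn't drown out the others. All names and numbers below are invented:

```python
def merged_ranking(platforms):
    """Merge per-platform top lists, scoring each pack relative to its platform's best."""
    scored = []
    for platform, packs in platforms.items():
        top = max(d for _, d in packs) or 1  # normalise within the platform
        for name, downloads in packs:
            scored.append((downloads / top, name, platform))
    scored.sort(reverse=True)
    return [(name, platform) for _, name, platform in scored]

ranking = merged_ranking({
    "CurseForge": [("PackA", 1_000_000), ("PackB", 250_000)],
    "ATLauncher": [("PackC", 40_000), ("PackD", 10_000)],
})
```

With this scheme each platform's flagship pack scores 1.0, so the merged homepage interleaves platforms instead of listing one platform's entire catalogue first.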
389,065 | 11,496,856,170 | IssuesEvent | 2020-02-12 08:56:05 | arcticicestudio/nord-jetbrains | https://api.github.com/repos/arcticicestudio/nord-jetbrains | opened | Random editor scheme highlight breakage | context-syntax priority-high scope-compatibility scope-stability scope-ux status-tracking type-proposal | This is a meta-issue to collect and aggregate all information regarding the problems related to “randomly breaking syntax highlighting“. There is a continuously increasing number of issues related to this bug where the root cause is still a mystery.
The following timeline shows the problem based on reported issues in this repository.
#### 2019-07-31 — First breakages of Go & JavaScript syntax since IDE versions 2019.2.x
The first cases are documented in #69 & #77 where the syntax highlighting of some Go & JavaScript elements were wrong after updating to IntelliJ version _2019.2.0_, the update that [introduced support for 20+ languages][rl-2019-2] out-of-the-box by integrating [TextMate][tm] schemes.
It resulted in a change for some Go & JavaScript editor color scheme keys that previously inherited the best matching global keys, but used the attributes defined by the parent theme _Darcula_ after the update instead. Therefore Nord's highlighting for Go & JavaScript broke and required explicitly defining the values for some attributes (merged in #70 & #78) in order to achieve the same highlighting as in previous versions:
A [comparison of the changes between Nord plugin version 0.6.0 and 0.7.0][comp-time-v0.6.0-v0.7.0] shows that there were absolutely no changes to the editor color scheme related to the highlighting of Go & JavaScript code. At the time the guess was that the root cause was the integration of _TextMate_ themes, and the “fixes“ were released in [version 0.8.0][rl-v0.8.0].
#### 2019-12-02 — Second breakage of Go syntax since IDE versions 2019.3.x
The second case is documented in #108 where the syntax highlighting of some Go elements was wrong again after updating to IntelliJ version _2019.3.0_. A [comparison of the changes between Nord plugin version 0.8.0 and the time the issue was created (2019-12-02)][comp-time-v0.8.0-2019-12-02] shows again that there were no changes to the editor color scheme related to the highlighting of Go code.
An **interesting observation** was that the **wrong highlighting could be fixed by disabling and enabling the Nord plugin again without restarting the IDE** (deny/postpone to later when the question dialog shows up).
Again, the “fixes“ were then released in the new plugin version [version 0.9.0][rl-v0.9.0].
#### 2020-01-28 — Another breakage of JavaScript & TypeScript syntax in IDE versions 2019.3.x
On 2020-01-28 a new issue has been created that describes the breakage of the syntax highlighting for JavaScript as well as TypeScript (which inherits values from the JavaScript editor scheme keys) in #115. This is **really strange** since the [affected elements were fixed in #78][gh-78-js-changes] to **mitigate the first breakage**!
To fix the problem again, the [color definitions were then defined explicitly in #116][gh-116-js-changes] **instead of relying on the non-working inheritance of other theme keys**.
The [comparison of the changes between Nord plugin version 0.9.0 and the time the issue was created (2020-01-28)][comp-time-v0.9.0-2020-01-28] again shows that there were **no changes to the highlighting of JavaScript or TypeScript syntax elements in the editor color scheme**.
It was also **possible again to temporarily work around the problem by re-enabling or even re-installing the plugin**. This definitely shows that the root cause must be somewhere in the way the IDE loads themes and how editor color scheme keys are inherited from other keys.
Again, the “fixes“ were then released in the new plugin version [version 0.10.0][rl-v0.10.0] **on 2020-02-11**.
#### 2020-02-11 — Now PHP and general _markup_ languages are also broken…
The **latest case occurred only several hours after** [Nord plugin version 0.10.0][rl-v0.10.0] was deployed and made public through the _JetBrains Plugin Marketplace_. This time the highlighting of strings and comments in PHP, Markdown font styles (bold & italic) as well as other elements of _markup_ languages are broken.
Again, a [comparison of the changes between Nord plugin version 0.9.0 and 0.10.0][comp-v0.9.0-v0.10.0] shows there were no changes to the highlighting for editor color scheme keys for PHP, Markdown or any _markup_ languages or “Language Default“ styles.
**During the testing of Nord plugin version 0.10.0**, that was **released to fix the problems of the broken JavaScript & TypeScript highlighting**, there were **no problems regarding _markup_ styles** and all elements were working fine in Markdown files. Right after deploying the plugin to the _JetBrains Plugin Marketplace_, the **highlighting suddenly broke “out of nowhere“ after updating the plugin for my IntelliJ**.
## Conclusion
These random breakages “drive me nuts“ and it's **frustrating as a theme author to not be able to track down the root cause**. Since the **problem occurs randomly**, but **can also be temporarily mitigated through one or more plugin re-activations or re-installations**, the **problem origin must be a bug in the IDE itself**.
## Next Steps
1. **Implement workaround for other possible breakages** — To prevent more styles from breaking, I'll replace all editor color scheme keys that inherit values from other keys with explicit style definitions instead. This causes the code of the editor scheme to increase drastically due to duplicate and repeated styles, but it is currently the only way to work around this non-working style inheritance in the IDE theme API.
2. **Submit a bug report to the JetBrains IntelliJ bug tracker** — In order to get rid of this annoying bug, I'll submit a detailed bug report to the JetBrains bug tracker for the IntelliJ IDE including the details of this issue.
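The explicit-definition workaround in step 1 can be sketched as an editor color scheme fragment. The key name, base key, and color value below are illustrative examples only, not Nord's actual definitions:

```xml
<!-- Illustrative scheme fragment; key names and colors are examples only. -->

<!-- Before: the key inherits its style from another key. This inheritance
     is what appears to break randomly after IDE or plugin updates. -->
<option name="GO_PACKAGE" baseAttributes="DEFAULT_IDENTIFIER" />

<!-- After: the same key with its attributes spelled out explicitly,
     duplicating the inherited values so nothing relies on inheritance. -->
<option name="GO_PACKAGE">
  <value>
    <option name="FOREGROUND" value="d8dee9" />
  </value>
</option>
```

The tradeoff is exactly the one named above: the scheme file grows because every previously inherited value is now repeated, but each key's style no longer depends on the inheritance mechanism.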
## Feedback Always Welcome!
If you're also affected by this problem, I'll really appreciate every feedback so this can be used to find the root cause and fix this bug permanently :rocket:
[comp-time-v0.6.0-v0.7.0]: https://github.com/arcticicestudio/nord-jetbrains/compare/master@%7B2019-05-23%7D...master@%7B2019-07-16%7D
[comp-time-v0.8.0-2019-12-02]: https://github.com/arcticicestudio/nord-jetbrains/compare/v0.8.0...develop@%7B2019-12-02%7D
[comp-time-v0.9.0-2020-01-28]: https://github.com/arcticicestudio/nord-jetbrains/compare/v0.9.0...develop@%7B2020-01-28%7D
[comp-v0.9.0-v0.10.0]: https://github.com/arcticicestudio/nord-jetbrains/compare/v0.9.0...v0.10.0
[gh-116-js-changes]: https://github.com/arcticicestudio/nord-jetbrains/pull/116/files#diff-1146aace8d65c51b72c60139418ad4d0R1118-R1133
[gh-78-js-changes]: https://github.com/arcticicestudio/nord-jetbrains/pull/78/files#diff-1146aace8d65c51b72c60139418ad4d0R1016-R1018
[rl-2019-2]: https://www.jetbrains.com/idea/whatsnew/#v2019-2-editor
[rl-v0.10.0]: https://github.com/arcticicestudio/nord-jetbrains/releases/tag/v0.10.0
[rl-v0.8.0]: https://github.com/arcticicestudio/nord-jetbrains/releases/tag/v0.8.0
[rl-v0.9.0]: https://github.com/arcticicestudio/nord-jetbrains/releases/tag/v0.9.0
[tm]: https://macromates.com
| 1.0 | Random editor scheme highlight breakage - This is a meta-issue to collect and aggregate all information regarding the problems related to “randomly breaking syntax highlighting“. There is a continuously increasing number of issues related to this bug where the root cause is still a mystery.
The following timeline shows the problem based on reported issues in this repository.
#### 2019-07-31 — First breakages of Go & JavaScript syntax since IDE versions 2019.2.x
The first cases are documented in #69 & #77 where the syntax highlighting of some Go & JavaScript elements were wrong after updating to IntelliJ version _2019.2.0_, the update that [introduced support for 20+ languages][rl-2019-2] out-of-the-box by integrating [TextMate][tm] schemes.
It resulted in a change for some Go & JavaScript editor color scheme keys that previously inherited the best matching global keys, but used the attributes defined by the parent theme _Darcula_ after the update instead. Therefore Nord's highlighting for Go & JavaScript broke and required explicitly defining the values for some attributes (merged in #70 & #78) in order to achieve the same highlighting as in previous versions:
A [comparison of the changes between Nord plugin version 0.6.0 and 0.7.0][comp-time-v0.6.0-v0.7.0] shows that there were absolutely no changes to the editor color scheme related to the highlighting of Go & JavaScript code. At the time the guess was that the root cause was the integration of _TextMate_ themes, and the “fixes“ were released in [version 0.8.0][rl-v0.8.0].
#### 2019-12-02 — Second breakage of Go syntax since IDE versions 2019.3.x
The second case is documented in #108 where the syntax highlighting of some Go elements was wrong again after updating to IntelliJ version _2019.3.0_. A [comparison of the changes between Nord plugin version 0.8.0 and the time the issue was created (2019-12-02)][comp-time-v0.8.0-2019-12-02] shows again that there were no changes to the editor color scheme related to the highlighting of Go code.
An **interesting observation** was that the **wrong highlighting could be fixed by disabling and enabling the Nord plugin again without restarting the IDE** (deny/postpone to later when the question dialog shows up).
Again, the “fixes“ were then released in the new plugin version [version 0.9.0][rl-v0.9.0].
#### 2020-01-28 — Another breakage of JavaScript & TypeScript syntax in IDE versions 2019.3.x
On 2020-01-28 a new issue has been created that describes the breakage of the syntax highlighting for JavaScript as well as TypeScript (which inherits values from the JavaScript editor scheme keys) in #115. This is **really strange** since the [affected elements were fixed in #78][gh-78-js-changes] to **mitigate the first breakage**!
To fix the problem again, the [color definitions were then defined explicitly in #116][gh-116-js-changes] **instead of relying on the non-working inheritance of other theme keys**.
The [comparison of the changes between Nord plugin version 0.9.0 and the time the issue was created (2020-01-28)][comp-time-v0.9.0-2020-01-28] again shows that there were **no changes to the highlighting of JavaScript or TypeScript syntax elements in the editor color scheme**.
It was also **possible again to temporarily work around the problem by re-enabling or even re-installing the plugin**. This definitely shows that the root cause must be somewhere in the way the IDE loads themes and how editor color scheme keys are inherited from other keys.
Again, the “fixes“ were then released in the new plugin version [version 0.10.0][rl-v0.10.0] **on 2020-02-11**.
#### 2020-02-11 — Now PHP and general _markup_ languages are also broken…
The **latest case occurred only several hours after** [Nord plugin version 0.10.0][rl-v0.10.0] was deployed and made public through the _JetBrains Plugin Marketplace_. This time the highlighting of strings and comments in PHP, Markdown font styles (bold & italic) as well as other elements of _markup_ languages are broken.
Again, a [comparison of the changes between Nord plugin version 0.9.0 and 0.10.0][comp-v0.9.0-v0.10.0] shows there were no changes to the highlighting for editor color scheme keys for PHP, Markdown or any _markup_ languages or “Language Default“ styles.
**During the testing of Nord plugin version 0.10.0**, that was **released to fix the problems of the broken JavaScript & TypeScript highlighting**, there were **no problems regarding _markup_ styles** and all elements were working fine in Markdown files. Right after deploying the plugin to the _JetBrains Plugin Marketplace_, the **highlighting suddenly broke “out of nowhere“ after updating the plugin for my IntelliJ**.
## Conclusion
These random breakages “drive me nuts“ and it's **frustrating as a theme author to not be able to track down the root cause**. Since the **problem occurs randomly**, but **can also be temporarily mitigated through one or more plugin re-activations or re-installations**, the **problem origin must be a bug in the IDE itself**.
## Next Steps
1. **Implement workaround for other possible breakages** — To prevent more styles from breaking, I'll replace all editor color scheme keys that inherit values from other keys with explicit style definitions instead. This causes the code of the editor scheme to increase drastically due to duplicate and repeated styles, but it is currently the only way to work around this non-working style inheritance in the IDE theme API.
2. **Submit a bug report to the JetBrains IntelliJ bug tracker** — In order to get rid of this annoying bug, I'll submit a detailed bug report to the JetBrains bug tracker for the IntelliJ IDE including the details of this issue.
## Feedback Always Welcome!
If you're also affected by this problem, I'll really appreciate every feedback so this can be used to find the root cause and fix this bug permanently :rocket:
[comp-time-v0.6.0-v0.7.0]: https://github.com/arcticicestudio/nord-jetbrains/compare/master@%7B2019-05-23%7D...master@%7B2019-07-16%7D
[comp-time-v0.8.0-2019-12-02]: https://github.com/arcticicestudio/nord-jetbrains/compare/v0.8.0...develop@%7B2019-12-02%7D
[comp-time-v0.9.0-2020-01-28]: https://github.com/arcticicestudio/nord-jetbrains/compare/v0.9.0...develop@%7B2020-01-28%7D
[comp-v0.9.0-v0.10.0]: https://github.com/arcticicestudio/nord-jetbrains/compare/v0.9.0...v0.10.0
[gh-116-js-changes]: https://github.com/arcticicestudio/nord-jetbrains/pull/116/files#diff-1146aace8d65c51b72c60139418ad4d0R1118-R1133
[gh-78-js-changes]: https://github.com/arcticicestudio/nord-jetbrains/pull/78/files#diff-1146aace8d65c51b72c60139418ad4d0R1016-R1018
[rl-2019-2]: https://www.jetbrains.com/idea/whatsnew/#v2019-2-editor
[rl-v0.10.0]: https://github.com/arcticicestudio/nord-jetbrains/releases/tag/v0.10.0
[rl-v0.8.0]: https://github.com/arcticicestudio/nord-jetbrains/releases/tag/v0.8.0
[rl-v0.9.0]: https://github.com/arcticicestudio/nord-jetbrains/releases/tag/v0.9.0
[tm]: https://macromates.com
| priority | random editor scheme highlight breakage this is a meta issue to collect and aggregate all information regarding the problems related to “randomly breaking syntax highlighting“ there is an continuously increasing amount of issues related this bug were the root cause is still a mystery the following timeline shows the problem based on reported issues in this repository — first breakages of go javascript syntax since ide versions x the first cases are documented in where the syntax highlighting of some go javascript elements were wrong after updating to intellij version the update that out of the box by integrating schemes it resulted in a change for some go javascript editor color scheme keys that previously inherited the best matching global keys but used the attributes defined by the parent theme darcula after the update instead therefore nord s highlighting for go javascript broke and required to explicitly define the values for the some attributes merged in in order to achieve the same highlight like in previous versions a shows that there were absolutely no changes to the editor color scheme related to the highlighting of go javascript code to this time the guess was that the root cause was the integration of textmate themes and the “fixes“ have been released in — second breakage of go syntax since ide versions x the second case is documented in where the syntax highlighting of some go elements were wrong again after updating to intellij version a shows again that there were no changes to the editor color scheme related to the highlighting of go code an interesting observation was that the wrong highlighting could be fixed by disabling and enabling the nord plugin again without restarting the ide deny postpone to later when the question dialog shows up again the “fixes“ were then released in a the new plugin version — another breakage of javascript typescript syntax in ide versions x on a new issue has been created that describes the breakage of the 
syntax highlighting for javascript as well as typescript which inherits values from the javascript editor scheme keys in this is really strange since the to mitigate the first breakage to fix the problem again the instead of relying on the non working inheritance of other theme keys the again showing that there were no changes to the highlighting of javascript or typescript syntax elements in the editor color scheme it was also possible again to temporarily work around the problem by re enabling or even re installing the plugin this definitely shows that the root cause must be somewhere in the way the ide loads themes and how editor color scheme keys are inherited from other keys again the “fixes“ were then released in a the new plugin version on — now php and general markup languages are also broken… the latest case occurred only several hours after was deployed and made public through the jetbrains plugin marketplace this time the highlighting of strings and comments in php markdown font styles bold italic as well as other elements of markup languages are broken again a there no changes to the highlighting for editor color scheme keys for php markdown or any markup languages or “language default“ styles during the testing of nord plugin version that was released to fix the problems of the broken javascript typescript highlighting there were no problems regarding markup styles and all elements were working fine in markdown files right after deploying the plugin to the jetbrains plugin marketplace the highlighting suddenly broke “out of nowhere“ after updating the plugin for my intellij conclusion these random breakages “drive me nuts“ and it s frustrating as a theme author to no being able to track down the root cause since the problem occurs randomly but can also be temporarily mitigated through one or more plugin re activation or re installations the problem origin must be a bug in the ide itself next steps implement workaround for other possible breakages — to 
prevent more styles from breaking i ll replace all editor color scheme keys that inherit values from other keys with the explicit style definitions instead this causes the code of the editor scheme to increase drastically due to duplicate and repeated styles but it currently the only way to work around this non working style inheritance in the ide theme api submit a bug report to the jetbrains intellij bug tracker — in order to get rid of this annoying bug i ll submit a detailed bug report to the jetbrains bug tracker for the intellij ide including the details of this issue feedback always welcome if you re also affected by this problem i ll really appreciate every feedback so this can be used to find the root cause and fix this bug permanently rocket | 1 |
738,051 | 25,543,270,338 | IssuesEvent | 2022-11-29 16:45:25 | FrozenBlock/WilderWild | https://api.github.com/repos/FrozenBlock/WilderWild | closed | Palm leaves decay | enhancement todo block worldgen high priority | Ok, so, to keep palm trees from looking like shit, we need to figure out how to make the vertical leaves not disappear. I have two suggestions:
1. Add vertical leaves support
2. Make leaves that depend on the block (pic); this block does not allow the leaves to decay within a 6x6x6 radius, and they drop saplings or coconuts/bananas

| 1.0 | Palm leaves decay - Ok, so, to keep palm trees from looking like shit, we need to figure out how to make the vertical leaves not disappear. I have two suggestions:
1. Add vertical leaves support
2. Make leaves that depend on the block (pic); this block does not allow the leaves to decay within a 6x6x6 radius, and they drop saplings or coconuts/bananas

| priority | palm leaves decay ok so for palm trees don t look like shit we need to figure out how to make the vertical leaves not disappear i have two suggestions add vertical leaves support make leaves that will depend on the block pic this block does not allow the leaves to decaying in a radius of drop saplings or coconuts bananas | 1 |
482,932 | 13,915,977,094 | IssuesEvent | 2020-10-21 02:07:10 | phetsims/sun | https://api.github.com/repos/phetsims/sun | closed | how to add enabled/disabled feature to UI components? | priority:2-high status:ready-for-review | Developer consensus was to use this pattern for implementing enabled/disabled feature: https://github.com/phetsims/sun/issues/241#issuecomment-234031982. But as @samreid hinted at in https://github.com/phetsims/sun/issues/241#issuecomment-234323014, adding this feature currently requires duplicate/boilerplate code. I've recently added this feature to ComboBox and NumberControl, and thought we should discuss other methods of adding the feature.
Here's the relevant code that needs to be added:
``` js
function MyComponent( ..., options ) {
var self = this; // self-reference needed by the closures below
options = _.extend( {
enabledProperty: new Property( true ),
disabledOpacity: 0.5, // {number} opacity used to make the control look disabled
}, options );
// validate options
assert && assert( options.disabledOpacity >= 0 && options.disabledOpacity <= 1,
'invalid disabledOpacity: ' + options.disabledOpacity );
// @public
this.enabledProperty = options.enabledProperty;
// enable/disable the component
var enabledObserver = function( enabled ) {
self.pickable = enabled;
self.opacity = enabled ? 1.0 : options.disabledOpacity;
};
this.enabledProperty.link( enabledObserver );
// @private called by dispose
this.disposeMyComponent = function() {
self.enabledProperty.unlink( enabledObserver );
};
}
return inherit( Supertype, MyComponent, {
// @public
dispose: function() {
this.disposeMyComponent();
},
// @public
setEnabled: function( enabled ) { this.enabledProperty.value = enabled; },
set enabled( value ) { this.setEnabled( value ); },
// @public
getEnabled: function() { return this.enabledProperty.value; },
get enabled() { return this.getEnabled(); }
} );
```
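A minimal standalone sketch of the same link/unlink pattern is below. The tiny `Property` stand-in is illustrative only, not PhET's actual axon implementation, and `MyComponent` is simplified to just the enabled/disabled boilerplate:

```javascript
// Tiny observable Property stand-in (illustrative, not axon's Property).
function Property( value ) {
  this.value = value;
  this._listeners = [];
}
Property.prototype.link = function( listener ) {
  this._listeners.push( listener );
  listener( this.value ); // link calls back immediately, as in axon
};
Property.prototype.unlink = function( listener ) {
  this._listeners = this._listeners.filter( function( l ) { return l !== listener; } );
};
Property.prototype.set = function( value ) {
  this.value = value;
  this._listeners.forEach( function( l ) { l( value ); } );
};

// The enabled/disabled boilerplate from the issue, in isolation.
function MyComponent( options ) {
  var self = this;
  options = options || {};
  this.enabledProperty = options.enabledProperty || new Property( true );
  var disabledOpacity = ( options.disabledOpacity !== undefined ) ? options.disabledOpacity : 0.5;
  this._enabledObserver = function( enabled ) {
    self.pickable = enabled;
    self.opacity = enabled ? 1.0 : disabledOpacity;
  };
  this.enabledProperty.link( this._enabledObserver );
}
MyComponent.prototype.dispose = function() {
  this.enabledProperty.unlink( this._enabledObserver );
};

var component = new MyComponent();
console.log( component.opacity );  // 1 (enabled by default)
component.enabledProperty.set( false );
console.log( component.opacity );  // 0.5 (the disabled look)
```

After `dispose`, further changes to `enabledProperty` no longer touch the component, which is exactly why every subtype currently has to remember the `unlink` boilerplate.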
| 1.0 | how to add enabled/disabled feature to UI components? - Developer consensus was to use this pattern for implementing enabled/disabled feature: https://github.com/phetsims/sun/issues/241#issuecomment-234031982. But as @samreid hinted at in https://github.com/phetsims/sun/issues/241#issuecomment-234323014, adding this feature currently requires duplicate/boilerplate code. I've recently added this feature to ComboBox and NumberControl, and thought we should discuss other methods of adding the feature.
Here's the relevant code that needs to be added:
``` js
function MyComponent( ..., options ) {
var self = this; // self-reference needed by the closures below
options = _.extend( {
enabledProperty: new Property( true ),
disabledOpacity: 0.5, // {number} opacity used to make the control look disabled
}, options );
// validate options
assert && assert( options.disabledOpacity >= 0 && options.disabledOpacity <= 1,
'invalid disabledOpacity: ' + options.disabledOpacity );
// @public
this.enabledProperty = options.enabledProperty;
// enable/disable the component
var enabledObserver = function( enabled ) {
self.pickable = enabled;
self.opacity = enabled ? 1.0 : options.disabledOpacity;
};
this.enabledProperty.link( enabledObserver );
// @private called by dispose
this.disposeMyComponent = function() {
self.enabledProperty.unlink( enabledObserver );
};
}
return inherit( Supertype, MyComponent, {
// @public
dispose: function() {
this.disposeMyComponent();
},
// @public
setEnabled: function( enabled ) { this.enabledProperty.value = enabled; },
set enabled( value ) { this.setEnabled( value ); },
// @public
getEnabled: function() { return this.enabledProperty.value; },
get enabled() { return this.getEnabled(); }
} );
```
| priority | how to add enabled disabled feature to ui components developer consensus was to use this pattern for implementing enabled disabled feature but as samreid hinted at in adding this feature currently requires duplicate boilerplate code i ve recently added this feature to combobox and numbercontrol and thought we should discuss other methods of adding the feature here s the relevant code that needs to be added js function mycomponent options options extend enabledproperty new property true disabledopacity number opacity used to make the control look disabled options validate options assert assert options disabledopacity options disabledopacity invalid disabledopacity options disabledopacity public this enabledproperty options enabledproperty enable disable the component var enabledobserver function enabled self pickable enabled self opacity enabled options disabledopacity this enabledproperty link enabledobserver private called by dispose this disposemycomponent function self enabledproperty unlink enabledobserver return inherit supertype mycomponent public dispose function this disposemycomponent public setenabled function enabled this enabledproperty value enabled set enabled value this setenabled value public getenabled function return this enabledproperty value get enabled return this getenabled | 1 |
518,054 | 15,023,264,675 | IssuesEvent | 2021-02-01 18:00:14 | OpenEnergyDashboard/OED | https://api.github.com/repos/OpenEnergyDashboard/OED | opened | allow multiple users with roles | p-high-priority t-enhancement | We are adding the ability to receive data from Obvius and CSV files along with downloading the raw meter data. We need to have a user and password associated with these actions to make sure only authorized people do them. Having them linked to the admin risks that password and limits granularity of access. Thus, users need to be enhanced to have a role. We need an admin page to edit users and modify the current code to change login check for admin role. Also, the current setup in Obvius, CSV and export will all check this user (for export it will only be for large downloads). This is related to issues #265 and #231.
On a related note, it would also be nice to allow the installer to set the email & password for the initial admin rather than using defaults that are then reset after the install. Maybe this can be addressed as part of this or made a separate issue if that does not happen. | 1.0 | allow multiple users with roles - We are adding the ability to receive data from Obvius and CSV files along with downloading the raw meter data. We need to have a user and password associated with these actions to make sure only authorized people do them. Having them linked to the admin risks that password and limits granularity of access. Thus, users need to be enhanced to have a role. We need an admin page to edit users and modify the current code to change login check for admin role. Also, the current setup in Obvius, CSV and export will all check this user (for export it will only be for large downloads). This is related to issues #265 and #231.
On a related note, it would also be nice to allow the installer to set the email & password for the initial admin rather than using defaults that are then reset after the install. Maybe this can be address as part of this or made a separate issue if that does not happen. | priority | allow multiple users with roles we are adding the ability to receive data from obvius and csv files along with downloading the raw meter data we need to have a user and password associated with these actions to make sure only authorized people do them having them linked to the admin risks that password and limits granularity of access thus users need to be enhanced to have a role we need an admin page to edit users and modify the current code to change login check for admin role also the current setup in obvius csv and export will all check this user for export it will only be for large downloads this is related to issues and on a related note it would also be nice to allow the installer to set the email password for the initial admin rather than using defaults that are then reset after the install maybe this can be address as part of this or made a separate issue if that does not happen | 1 |
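The role-based gating the OED issue above asks for can be sketched generically. All names here (the roles, `isAuthorized`, `requireRole`) are hypothetical illustrations, not OED's actual code or API:

```javascript
// Hypothetical role set; an admin may do everything, other roles only
// their own action (Obvius upload, CSV upload, large export, etc.).
const ROLES = Object.freeze( { ADMIN: 'admin', OBVIUS: 'obvius', CSV: 'csv', EXPORT: 'export' } );

// Core check: admin passes everything, otherwise roles must match.
function isAuthorized( user, requiredRole ) {
  return user.role === ROLES.ADMIN || user.role === requiredRole;
}

// Express-style middleware wrapper around the check (illustrative shape).
function requireRole( requiredRole ) {
  return function ( req, res, next ) {
    if ( isAuthorized( req.user, requiredRole ) ) {
      next();
    } else {
      res.status( 403 ).json( { error: 'insufficient role' } );
    }
  };
}

console.log( isAuthorized( { role: 'admin' }, ROLES.CSV ) );  // true
console.log( isAuthorized( { role: 'export' }, ROLES.CSV ) ); // false
```

Keeping the check in one helper means the Obvius, CSV, and export routes all share the same authorization logic instead of each re-checking the admin password.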
493,717 | 14,237,001,496 | IssuesEvent | 2020-11-18 16:40:22 | bounswe/bounswe2020group4 | https://api.github.com/repos/bounswe/bounswe2020group4 | closed | (AND) Product detail page | Android Coding Effort: Medium Priority: High Status: Needs Review | Milestone 1 - Android Implementation

Deadline: 22.11.2020 Sunday 18.30
| 1.0 | (AND) Product detail page - Milestone 1 - Android Implementation

Deadline: 22.11.2020 Sunday 18.30
| priority | and product detail page milestone android implementation deadline sunday | 1 |
648,207 | 21,178,480,567 | IssuesEvent | 2022-04-08 04:33:03 | ballerina-platform/graphql-tools | https://api.github.com/repos/ballerina-platform/graphql-tools | closed | [Improvement] Add support for GraphQL tool to pack with Ballerina distribution | Priority/Highest Type/Improvement Team/Connector ConnectorTools/GraphQL | **Description:**
We need to add support for the GraphQL tool to pack with the Ballerina distribution | 1.0 | [Improvement] Add support for GraphQL tool to pack with Ballerina distribution - **Description:**
We need to add support for the GraphQL tool to pack with the Ballerina distribution | priority | add support for graphql tool to pack with ballerina distribution description we need to add support for the graphql tool to pack with the ballerina distribution | 1 |
403,525 | 11,842,347,897 | IssuesEvent | 2020-03-23 22:50:40 | ChainSafe/gossamer | https://api.github.com/repos/ChainSafe/gossamer | opened | ignore BlockResponses that don't match or aren't requested by us | Priority: 2 - High network | in the network state, we shouldn't send BlockResponses to core that aren't requested by us | 1.0 | ignore BlockResponses that don't match or aren't requested by us - in the network state, we shouldn't send BlockResponses to core that aren't requested by us | priority | ignore blockresponses that don t match aren t requested by us in the network state we shouldn t send blockresponses to core that aren t requested by us
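The bookkeeping the gossamer issue above describes (only forward block responses that correspond to a request we actually sent) can be sketched generically. This is an illustration of the idea only; gossamer itself is written in Go and this is not its code:

```javascript
// Illustrative sketch: remember our own pending request IDs and drop
// any BlockResponse that does not correspond to one of them.
class ResponseFilter {
  constructor() {
    this.pending = new Set();
  }
  // Record a BlockRequest we sent out.
  sendRequest( id ) {
    this.pending.add( id );
  }
  // Returns true if the response should be forwarded to core.
  accept( response ) {
    if ( !this.pending.has( response.id ) ) {
      return false; // unsolicited response, ignore it
    }
    this.pending.delete( response.id ); // each request is answered once
    return true;
  }
}

const filter = new ResponseFilter();
filter.sendRequest( 42 );
console.log( filter.accept( { id: 42 } ) ); // true (we asked for it)
console.log( filter.accept( { id: 7 } ) );  // false (never requested)
```

Deleting the ID on acceptance also makes duplicate responses to the same request get dropped, which is usually the desired behavior for this kind of filter.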
539,690 | 15,793,214,677 | IssuesEvent | 2021-04-02 08:34:46 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Role validation fails after running the profile optimization in APIM 3.2.0 | API-M 4.0.0 Affected/3.2.0 Priority/Highest Type/Bug | We were able to observe that the users can log in to the devportal with only internal/publisher and internal/creator roles when the server is optimized as api-devportal and run as the profile.
Steps:
1. Get a wso2am-3.2.0 pack
2. Run sh profileSetup.sh -Dprofile=api-devportal
3. Start the server: sh wso2server.sh -Dprofile=api-devportal
4. Create a user with internal/publisher and internal/creator roles
5. Log in to the devportal.
This seems to be a bug. Kindly let us know how we can proceed further. | 1.0 | Role validation fails after running the profile optimization in APIM 3.2.0 - We were able to observe that the users can log in to the devportal with only internal/publisher and internal/creator roles when the server is optimized as api-devportal and run as the profile.
Steps:
1. Get a wso2am-3.2.0 pack
2. Run sh profileSetup.sh -Dprofile=api-devportal
3. Start the server: sh wso2server.sh -Dprofile=api-devportal
4. Create a user with internal/publisher and internal/creator roles
5. Log in to the devportal.
This seems to be a bug. Kindly let us know how we can proceed further. | priority | role validation fails after running the profile optimization in apim we were able to observe that the users can log in to the devportal with only internal publisher and internal creator roles when the server is optimized as api devportal and run as the profile steps get a pack run the sh profilesetup sh dprofile api devportal start the server sh sh dprofile api devportal create a user with internal publisher and internal creator roles log in to the devportal this seems to be a bug kindly let us know how we can proceed further | 1 |
105,195 | 4,232,236,374 | IssuesEvent | 2016-07-04 21:14:19 | ubc/acj-versus | https://api.github.com/repos/ubc/acj-versus | closed | Refactor database | developer suggestion enhancement high priority | Ongoing issue...
- [x] Discuss workflow (e.g., criteria)
- [x] Resolve issue #189
- [x] Redesign database
- [x] Rename tables and columns
- [x] Update related references in code
- [x] Rename models (issue #178)
- [x] Write migration script (pull request #300) | 1.0 | Refactor database - Ongoing issue...
- [x] Discuss workflow (e.g., criteria)
- [x] Resolve issue #189
- [x] Redesign database
- [x] Rename tables and columns
- [x] Update related references in code
- [x] Rename models (issue #178)
- [x] Write migration script (pull request #300) | priority | refactor database ongoing issue discuss workflow e g criteria resolve issue redesign database rename tables and columns update related references in code rename models issue write migration script pull request | 1 |
170,677 | 6,469,076,279 | IssuesEvent | 2017-08-17 04:02:26 | ocf/slackbridge | https://api.github.com/repos/ocf/slackbridge | closed | Fix Slack -> IRC mirroring crash on attempting to mirror URLs/images | bug high-priority | ```
2017-08-16 15:32:25-0700 [-] {'type': 'message', 'channel': 'C02EV1RRE', 'ts': '1502922743.000102', 'hidden': True, 'subtype': 'message_changed', 'message': {'icons': {'image_48': 'https://s3-us-west-2.amazonaws.com/slack-files2/bot_icons/2016-05-25/45780940772_48.png'}, 'attachments': [{'fallback': 'Washington Post: Analysis | Leaving downtown at rush hour in America’s largest cities', 'title': 'Analysis | Leaving downtown at rush hour in America’s largest cities', 'service_name': 'Washington Post', 'image_height': 250, 'image_url': 'https://www.washingtonpost.com/graphics/2017/national/escape-time/img/2300-escape-time_promo.jpg', 'title_link': 'https://www.washingtonpost.com/graphics/2017/national/escape-time/', 'id': 1, 'service_icon': 'http://www.washingtonpost.com/wp-srv/graphics/templates/mobile/ico/apple-touch-icon-144-precomposed.png', 'image_width': 375, 'image_bytes': 355239, 'from_url': 'https://www.washingtonpost.com/graphics/2017/national/escape-time/', 'text': 'Here’s how far you can get f
2017-08-16 15:32:25-0700 [-] Unhandled error in Deferred:
2017-08-16 15:32:25-0700 [-] Unhandled Error
Traceback (most recent call last):
File "/opt/slackbridge/venv/lib/python3.5/site-packages/twisted/internet/base.py", line 1199, in run
self.mainLoop()
File "/opt/slackbridge/venv/lib/python3.5/site-packages/twisted/internet/base.py", line 1208, in mainLoop
self.runUntilCurrent()
File "/opt/slackbridge/venv/lib/python3.5/site-packages/twisted/internet/base.py", line 828, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/opt/slackbridge/venv/lib/python3.5/site-packages/twisted/internet/task.py", line 239, in __call__
d = defer.maybeDeferred(self.f, *self.a, **self.kw)
--- <exception caught here> ---
File "/opt/slackbridge/venv/lib/python3.5/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/opt/slackbridge/slackbridge/bots.py", line 75, in check_slack_rtm
if user_bot.user_id == message['user']:
builtins.KeyError: 'user'
```
Another example is at https://i.fluffy.cc/xwKkVwSzBFm6WHzXDt6PcGLrpqbbNDwL.html | 1.0 | Fix Slack -> IRC mirroring crash on attempting to mirror URLs/images - ```
2017-08-16 15:32:25-0700 [-] {'type': 'message', 'channel': 'C02EV1RRE', 'ts': '1502922743.000102', 'hidden': True, 'subtype': 'message_changed', 'message': {'icons': {'image_48': 'https://s3-us-west-2.amazonaws.com/slack-files2/bot_icons/2016-05-25/45780940772_48.png'}, 'attachments': [{'fallback': 'Washington Post: Analysis | Leaving downtown at rush hour in America’s largest cities', 'title': 'Analysis | Leaving downtown at rush hour in America’s largest cities', 'service_name': 'Washington Post', 'image_height': 250, 'image_url': 'https://www.washingtonpost.com/graphics/2017/national/escape-time/img/2300-escape-time_promo.jpg', 'title_link': 'https://www.washingtonpost.com/graphics/2017/national/escape-time/', 'id': 1, 'service_icon': 'http://www.washingtonpost.com/wp-srv/graphics/templates/mobile/ico/apple-touch-icon-144-precomposed.png', 'image_width': 375, 'image_bytes': 355239, 'from_url': 'https://www.washingtonpost.com/graphics/2017/national/escape-time/', 'text': 'Here’s how far you can get f
2017-08-16 15:32:25-0700 [-] Unhandled error in Deferred:
2017-08-16 15:32:25-0700 [-] Unhandled Error
Traceback (most recent call last):
File "/opt/slackbridge/venv/lib/python3.5/site-packages/twisted/internet/base.py", line 1199, in run
self.mainLoop()
File "/opt/slackbridge/venv/lib/python3.5/site-packages/twisted/internet/base.py", line 1208, in mainLoop
self.runUntilCurrent()
File "/opt/slackbridge/venv/lib/python3.5/site-packages/twisted/internet/base.py", line 828, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/opt/slackbridge/venv/lib/python3.5/site-packages/twisted/internet/task.py", line 239, in __call__
d = defer.maybeDeferred(self.f, *self.a, **self.kw)
--- <exception caught here> ---
File "/opt/slackbridge/venv/lib/python3.5/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/opt/slackbridge/slackbridge/bots.py", line 75, in check_slack_rtm
if user_bot.user_id == message['user']:
builtins.KeyError: 'user'
```
Another example is at https://i.fluffy.cc/xwKkVwSzBFm6WHzXDt6PcGLrpqbbNDwL.html | priority | fix slack irc mirroring crash on attempting to mirror urls images type message channel ts hidden true subtype message changed message icons image attachments fallback washington post analysis leaving downtown at rush hour in america’s largest cities title analysis leaving downtown at rush hour in america’s largest cities service name washington post image height image url title link id service icon image width image bytes from url text here’s how far you can get f unhandled error in deferred unhandled error traceback most recent call last file opt slackbridge venv lib site packages twisted internet base py line in run self mainloop file opt slackbridge venv lib site packages twisted internet base py line in mainloop self rununtilcurrent file opt slackbridge venv lib site packages twisted internet base py line in rununtilcurrent call func call args call kw file opt slackbridge venv lib site packages twisted internet task py line in call d defer maybedeferred self f self a self kw file opt slackbridge venv lib site packages twisted internet defer py line in maybedeferred result f args kw file opt slackbridge slackbridge bots py line in check slack rtm if user bot user id message builtins keyerror user another example is at | 1 |
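The `KeyError: 'user'` in the traceback comes from indexing `message['user']` directly: subtype events such as `message_changed` (and link-unfurl updates like the one logged above) carry no top-level `user` field. A minimal sketch of the kind of guard that avoids the crash — the `user_bots` list shape and the function name are illustrative assumptions, not slackbridge's actual API:

```python
def pick_user_bot(message, user_bots):
    """Return the bot for the message's author, or None to skip the event.

    Subtype events like 'message_changed' have no top-level 'user' key,
    so look it up with .get() instead of indexing.
    """
    user_id = message.get('user')
    if user_id is None:
        # Hidden/edited/unfurl events: nothing to mirror directly.
        return None
    for bot in user_bots:
        if bot.user_id == user_id:
            return bot
    return None
```

With a guard along these lines, the RTM loop would quietly skip events it cannot attribute to a user instead of raising inside the Twisted reactor.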
681,052 | 23,295,677,979 | IssuesEvent | 2022-08-06 14:28:20 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | opened | buck-build-and-test fails to download jdk dependency | high priority module: ci | ### 🐛 Describe the bug
See https://github.com/pytorch/pytorch/runs/7698710864?check_suite_focus=true
### Versions
CI | 1.0 | buck-build-and-test fails to download jdk dependency - ### 🐛 Describe the bug
See https://github.com/pytorch/pytorch/runs/7698710864?check_suite_focus=true
### Versions
CI | priority | buck build and test fails to download jdk dependency 🐛 describe the bug see versions ci | 1 |
544,848 | 15,930,130,476 | IssuesEvent | 2021-04-14 00:08:16 | airctic/icevision | https://api.github.com/repos/airctic/icevision | opened | tests failing on effdet | bug priority-high | ## 🐛 Bug
Tests are failing on effdet because the weights on [model_zoo](https://github.com/airctic/model_zoo/releases/tag/m2) were updated
| 1.0 | tests failing on effdet - ## 🐛 Bug
Tests are failing on effdet because the weights on [model_zoo](https://github.com/airctic/model_zoo/releases/tag/m2) were updated
| priority | tests failing on effdet 🐛 bug tests are failing on effdet because the weights on were updated | 1 |
523,717 | 15,188,436,400 | IssuesEvent | 2021-02-15 15:06:16 | VocaDB/vocadb | https://api.github.com/repos/VocaDB/vocadb | closed | .NET Core conversion | backend complexity-epic high-priority | Model project should be converted to .NET Standard 2.0 first.
Then add new Web frontend project based on ASP.NET Core / .NET Core.
Then change Model to .NET Core.
What to do about the WCF interface? It is used by the Wiki integration and MikuBot. Could it be converted to gRPC or just use the REST APIs?
There shouldn't be too many .NET Framework / legacy ASP.NET dependencies.
- [X] Refactor JavaScript compilation to use Webpack (#5)
- [X] VocaDB.Model into .NET Standard (#597)
- [x] Add Web frontend based on ASP.NET Core (#598)
- [x] Find a replacement for WCF (#713)
- [ ] VocaDB.Model into .NET Core (#765) | 1.0 | .NET Core conversion - Model project should be converted to .NET Standard 2.0 first.
Then add new Web frontend project based on ASP.NET Core / .NET Core.
Then change Model to .NET Core.
What to do about the WCF interface? It is used by the Wiki integration and MikuBot. Could it be converted to gRPC or just use the REST APIs?
There shouldn't be too many .NET Framework / legacy ASP.NET dependencies.
- [X] Refactor JavaScript compilation to use Webpack (#5)
- [X] VocaDB.Model into .NET Standard (#597)
- [x] Add Web frontend based on ASP.NET Core (#598)
- [x] Find a replacement for WCF (#713)
- [ ] VocaDB.Model into .NET Core (#765) | priority | net core conversion model project should be converted to net standard first then add new web frontend project based on asp net core net core then change model to net core what to do about the wcf interface it is used by the wiki integration and mikubot could it be converted to grpc or just use the rest apis there shouldn t be too many net framework legacy asp net dependencies refactor javascript compilation to use webpack vocadb model into net standard add web frontend based on asp net core find a replacement for wcf vocadb model into net core | 1 |
236,456 | 7,749,305,982 | IssuesEvent | 2018-05-30 11:01:26 | Gloirin/m2gTest | https://api.github.com/repos/Gloirin/m2gTest | closed | 0003730:
support CONDSTORE extension for quick flag sync | Feature Request Felamimail high priority | **Reported by pschuele on 12 Jan 2011 08:57**
**Version:** git master
support CONDSTORE extension for quick flag sync
**Additional information:** see https://tools.ietf.org/html/rfc4551
| 1.0 | 0003730:
support CONDSTORE extension for quick flag sync - **Reported by pschuele on 12 Jan 2011 08:57**
**Version:** git master
support CONDSTORE extension for quick flag sync
**Additional information:** see https://tools.ietf.org/html/rfc4551
| priority | support condstore extension for quick flag sync reported by pschuele on jan version git master support condstore extension for quick flag sync additional information see | 1 |
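For context on why CONDSTORE makes flag sync quick: RFC 4551 adds a `CHANGEDSINCE` FETCH modifier, so a client that remembers the highest mod-sequence value it has seen can ask only for flags that changed after that point instead of re-fetching every message's flags. A hedged sketch of building such a command (the helper name is mine; a real client would issue this over an IMAP connection whose server advertises CONDSTORE):

```python
def changed_flags_fetch(uid_range, last_modseq):
    """Build a UID FETCH that returns only flags modified after last_modseq.

    Per RFC 4551 the server also reports a MODSEQ item for each returned
    message, letting the client advance its high-water mark for next time.
    """
    return 'UID FETCH {} (FLAGS) (CHANGEDSINCE {})'.format(uid_range, last_modseq)
```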
514,286 | 14,936,634,348 | IssuesEvent | 2021-01-25 13:38:53 | feelpp/feelpp | https://api.github.com/repos/feelpp/feelpp | closed | Improve support for slepc | module:alg priority:High status:in development type:feature | @romainhild we are going to improve the support for slepc and its options in feel++ and have something like what we have for standard linear algebra problems | 1.0 | Improve support for slepc - @romainhild we are going to improve the support for slepc and its options in feel++ and have something like what we have for standard linear algebra problems | priority | improve support for slepc romainhild we are going to improve the support for slepc and its options in feel and have something like what we have for standard linear algebra problems | 1 |
186,071 | 6,733,291,106 | IssuesEvent | 2017-10-18 14:25:16 | metasfresh/metasfresh | https://api.github.com/repos/metasfresh/metasfresh | closed | Businesspartner Import for different Partner with the same address | branch:master branch:release priority:high type:bug | ### Is this a bug or feature request?
bug
### What is the current behavior?
If two partners with different BP values and the same partner names (organisation name) but with identical addresses are imported, the location is not added for the imported partners.
There are also cases where partners with the same partner name and same address are imported correctly.
#### Which are the steps to reproduce?
1. add two different partners (name and BP value) with the same address to an import file
2. start the import of business partners
3. the partners are imported correctly but the addresses are not added to them
### What is the expected or desired behavior?
If two partners with the same address are imported with the import assistant, the location should be added for both partners, even if it is exactly the same address. This is especially important for apartment complexes with the exact same address.
| 1.0 | Businesspartner Import for different Partner with the same address - ### Is this a bug or feature request?
bug
### What is the current behavior?
If two partners with different BP values and the same partner names (organisation name) but with identical addresses are imported, the location is not added for the imported partners.
There are also cases where partners with the same partner name and same address are imported correctly.
#### Which are the steps to reproduce?
1. add two different partners (name and BP value) with the same address to an import file
2. start the import of business partners
3. the partners are imported correctly but the addresses are not added to them
### What is the expected or desired behavior?
If two partners with the same address are imported with the import assistant, the location should be added for both partners, even if it is exactly the same address. This is especially important for apartment complexes with the exact same address.
| priority | businesspartner import for different partner with the same address is this a bug or feature request bug what is the current behavior if two partner with different bp values and the same parnernames organisation name but with identical addresses are imported the location is not added for the imported partner there are also cases where partner with the same partner name and same address which are importet correctly which are the steps to reproduce add two different partner name and bp value with same address to a importfile start import of business partner partner are imported correctly but addresses are not added to partner what is the expected or desired behavior if two partner with the same address are imported with the import assistant the location should be added for both partner even if it is exactly the same address this is especially important for appartement complexes with the exact same address | 1 |
260,571 | 8,211,841,063 | IssuesEvent | 2018-09-04 14:47:52 | hpcugent/vsc_user_docs | https://api.github.com/repos/hpcugent/vsc_user_docs | closed | review/update documentation on copying files | Jasper (HPC-UGent student intern) priority:high | * see http://hpc.ugent.be/userwiki/index.php/User:VscCopy + http://hpc.ugent.be/userwiki/index.php/User:DataTransfer
* check also what's documented in VSC website
* https://www.vscentrum.be/client/windows/winscp
* https://www.vscentrum.be/client/macosx/data-cyberduck
* https://www.vscentrum.be/client/linux/data-openssh | 1.0 | review/update documentation on copying files - * see http://hpc.ugent.be/userwiki/index.php/User:VscCopy + http://hpc.ugent.be/userwiki/index.php/User:DataTransfer
* check also what's documented in VSC website
* https://www.vscentrum.be/client/windows/winscp
* https://www.vscentrum.be/client/macosx/data-cyberduck
* https://www.vscentrum.be/client/linux/data-openssh | priority | review update documentation on copying files see check also what s documented in vsc website | 1 |
782,491 | 27,498,240,776 | IssuesEvent | 2023-03-05 11:57:34 | dkdace/dmgr-server | https://api.github.com/repos/dkdace/dmgr-server | closed | [Feature] Apply Lombok project-wide | 🛠 Refactoring ⚠ Priority: High | ## ℹ Description
Apply Lombok across the entire codebase
## ✅ Tasks
- [x] Apply Lombok
## 💬 Comment
--
 | 1.0 | [Feature] Apply Lombok project-wide - ## ℹ Description
Apply Lombok across the entire codebase
## ✅ Tasks
- [x] Apply Lombok
## 💬 Comment
--
 | priority | apply lombok project wide ℹ description apply lombok across the entire codebase ✅ tasks apply lombok 💬 comment | 1 |
493,271 | 14,230,124,003 | IssuesEvent | 2020-11-18 07:33:31 | boostcamp-2020/Project01-C-User-Event-Collector | https://api.github.com/repos/boostcamp-2020/Project01-C-User-Event-Collector | opened | Write the feature list and backlog | Web docs ⭐️ high-priority | ## To Do
Define the project scope and write a feature list that describes each page's features and key behaviors.
Organize the features to implement and write them up as a backlog.
 | 1.0 | Write the feature list and backlog - ## To Do
Define the project scope and write a feature list that describes each page's features and key behaviors.
Organize the features to implement and write them up as a backlog.
 | priority | write the feature list and backlog to do define the project scope and write a feature list that describes each page s features and key behaviors organize the features to implement and write them up as a backlog | 1 |
261,238 | 8,228,497,779 | IssuesEvent | 2018-09-07 05:40:59 | nnsuite/TAOS-CI | https://api.github.com/repos/nnsuite/TAOS-CI | closed | [Feature] support Doxygen build verifier | priority-high | We have to add a Doxygen build verifier
Sometimes Doxygen does not generate the PDF book because the doxygen markup in the source is incorrect.
* For example, (python code)
The ''' comment symbol of Python code results in an abnormal operation of the doxygen command.
```python
"""
Array with associated photographic information.
...
```
* Action items:
1. A doxygen format/tag checker (done)
2. A doxygen generation/build verifier (todo)
| 1.0 | [Feature] support Doxygen build verifier - We have to add a Doxygen build verifier
Sometimes Doxygen does not generate the PDF book because the doxygen markup in the source is incorrect.
* For example, (python code)
The ''' comment symbol of Python code results in an abnormal operation of the doxygen command.
```python
"""
Array with associated photographic information.
...
```
* Action items:
1. A doxygen format/tag checker (done)
2. A doxygen generation/build verifier (todo)
| priority | support doxygen build verifier we have to add a doxygen build verifier sometimes a doxygen generates does not create a pdf book because a doxygen code is incorrect for example python code the comment symbol of python code results in an abnormal operation of the doxygen command bash array with associated photographic information action items a doxygen format tag checker done a doxygen generation build verifier todo | 1 |
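Alongside the format/tag checker, one way to keep Python sources from tripping Doxygen is to document them with Doxygen's `##` special comment blocks rather than triple-quoted strings, which its Python parser handles natively. This is a sketch of that style — a suggested workaround, not the fix the project settled on:

```python
## @brief Scale raw sensor readings into the 0..255 byte range.
#  @param values iterable of raw readings
#  @param peak   the maximum possible reading
#  @return list of ints clamped to 0..255
def to_bytes(values, peak):
    # Integer-truncate after scaling, then clamp into the byte range.
    return [min(255, max(0, int(255 * v / peak))) for v in values]
```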
92,071 | 3,865,242,531 | IssuesEvent | 2016-04-08 16:39:21 | AlexisNichel/bioid-mobile-base | https://api.github.com/repos/AlexisNichel/bioid-mobile-base | closed | Obtain Users | high-priority | Create a syncUsers method in a sync.js service that depends on api.js. It should first read the fingerprints from localstorage, or create an empty array if there are none. Then store the fingerprints on their own, so the reader can look them up faster, and the user data separately.
``` js
$api.getUsers().then(function (apiUsers) {
    var users = apiUsers;
    var fingerPrints = $localstorage.getObject('fingerPrints') || [],
        userData = $localstorage.getObject('users') || [];
    angular.forEach(users, function (value) {
        fingerPrints[value.IdHuella] = value.Huella;
        delete value.Huella;
        userData[value.IdHuella] = value;
    });
    $localstorage.setObject('fingerPrints', fingerPrints);
    $localstorage.setObject('users', userData);
    $localstorage.setObject('lastSync', new Date());
    // Call splashScreen to write the fingerprints with writeTemplate
});
```
Additionally, if the sync fails, record that in the log.
 | 1.0 | Obtain Users - Create a syncUsers method in a sync.js service that depends on api.js. It should first read the fingerprints from localstorage, or create an empty array if there are none. Then store the fingerprints on their own, so the reader can look them up faster, and the user data separately.
``` js
$api.getUsers().then(function (apiUsers) {
    var users = apiUsers;
    var fingerPrints = $localstorage.getObject('fingerPrints') || [],
        userData = $localstorage.getObject('users') || [];
    angular.forEach(users, function (value) {
        fingerPrints[value.IdHuella] = value.Huella;
        delete value.Huella;
        userData[value.IdHuella] = value;
    });
    $localstorage.setObject('fingerPrints', fingerPrints);
    $localstorage.setObject('users', userData);
    $localstorage.setObject('lastSync', new Date());
    // Call splashScreen to write the fingerprints with writeTemplate
});
```
Additionally, if the sync fails, record that in the log.
 | priority | obtain users create a syncusers method in a sync js service that depends on api js it should first read the fingerprints from localstorage or create an empty array if there are none then store the fingerprints on their own so the reader can look them up faster and the user data separately js api getusers then function apiusers var users apiusers var fingerprints localstorage getobject fingerprints userdata localstorage getobject users angular foreach users function value fingerprints value huella delete value huella userdata value localstorage setobject fingerprints fingerprints localstorage setobject users userdata localstorage setobject lastsync new date call splashscreen to write the fingerprints with writetemplate additionally if the sync fails record that in the log | 1 |
636,834 | 20,610,507,858 | IssuesEvent | 2022-03-07 08:06:00 | AY2122S2-CS2103T-T11-4/tp | https://api.github.com/repos/AY2122S2-CS2103T-T11-4/tp | closed | Update the DG: user stories, glossary, NFRs, use cases | priority.High | Add the following to the DG, based on your project notes from the previous weeks.
Some examples of these can be found in the [AB3 Developer Guide](https://se-education.org/addressbook-level3/DeveloperGuide.html#product-scope).
Target user profile, value proposition, and user stories: Update the target user profile and value proposition to match the project direction you have selected. Give a list of the user stories (and update/delete existing ones, if applicable), including priorities. This can include user stories considered but will not be included in the final product.
Use cases: Give use cases (textual form) for a few representative user stories that need multiple steps to complete. e.g. Adding a tag to a person (assume the user needs to find the person first)
Non-functional requirements: Note: Many of the given project constraints can be considered NFRs. You can add more. e.g. performance requirements, usability requirements, scalability requirements, etc.
Glossary: Define terms that are worth recording.
The above DG sections should cover the full requirements of the product, some of which might not get implemented by the end of this semester. Furthermore, these sections will be graded at the final project evaluation, and any bugs in the content can cost you marks at that point. The panel below gives some relevant DG bug examples you can lookout for:
[see the 2103t website](https://nus-cs2103-ay2122s2.github.io/website/schedule/week7/project.html) | 1.0 | Update the DG: user stories, glossary, NFRs, use cases - Add the following to the DG, based on your project notes from the previous weeks.
Some examples of these can be found in the [AB3 Developer Guide](https://se-education.org/addressbook-level3/DeveloperGuide.html#product-scope).
Target user profile, value proposition, and user stories: Update the target user profile and value proposition to match the project direction you have selected. Give a list of the user stories (and update/delete existing ones, if applicable), including priorities. This can include user stories considered but will not be included in the final product.
Use cases: Give use cases (textual form) for a few representative user stories that need multiple steps to complete. e.g. Adding a tag to a person (assume the user needs to find the person first)
Non-functional requirements: Note: Many of the given project constraints can be considered NFRs. You can add more. e.g. performance requirements, usability requirements, scalability requirements, etc.
Glossary: Define terms that are worth recording.
The above DG sections should cover the full requirements of the product, some of which might not get implemented by the end of this semester. Furthermore, these sections will be graded at the final project evaluation, and any bugs in the content can cost you marks at that point. The panel below gives some relevant DG bug examples you can lookout for:
[see the 2103t website](https://nus-cs2103-ay2122s2.github.io/website/schedule/week7/project.html) | priority | update the dg user stories glossary nfrs use cases add the following to the dg based on your project notes from the previous weeks some examples of these can be found in the target user profile value proposition and user stories update the target user profile and value proposition to match the project direction you have selected give a list of the user stories and update delete existing ones if applicable including priorities this can include user stories considered but will not be included in the final product use cases give use cases textual form for a few representative user stories that need multiple steps to complete e g adding a tag to a person assume the user needs to find the person first non functional requirements note many of the given project constraints can be considered nfrs you can add more e g performance requirements usability requirements scalability requirements etc glossary define terms that are worth recording the above dg sections should cover the full requirements of the product some of which might not get implemented by the end of this semester furthermore these sections will be graded at the final project evaluation and any bugs in the content can cost you marks at that point the panel below gives some relevant dg bug examples you can lookout for | 1 |
594,201 | 18,040,197,905 | IssuesEvent | 2021-09-18 00:19:57 | zulip/zulip-mobile | https://api.github.com/repos/zulip/zulip-mobile | opened | Discard fetches when the active account has changed | a-multi-org a-data-sync P1 high-priority | When we go and fetch some data from the server, we should be sure to associate the resulting data with the right account, and not with some other of the user's accounts. In particular this should be the case even if the fetch took a while, and if in the interim the user went and switched to look at a different account (the new ["active account"](https://github.com/zulip/zulip-mobile/blob/main/docs/glossary.md#active-account).)
We'll need to solve this as part of #5005, where we'll potentially have several fetches going concurrently for several different accounts, and each will need to know which account the results should get stored under.
But pending that, in our current world where we have server data for just the one active account at a time, it'd be enough to discard the fetch's result (and ideally cancel the fetch itself, #4170) if the active account has changed by then.
Currently we do this in the event-queue long-polling loop in `startEventPolling`, which is the most important case, but not most other places -- notably, not in `doInitialFetch`, or the various message-fetching actions in `fetchActions.js`.
Issues #4170 and #4659 cover parts of how we might do this.
Issue #3791 is basically an example symptom of this.
| 1.0 | Discard fetches when the active account has changed - When we go and fetch some data from the server, we should be sure to associate the resulting data with the right account, and not with some other of the user's accounts. In particular this should be the case even if the fetch took a while, and if in the interim the user went and switched to look at a different account (the new ["active account"](https://github.com/zulip/zulip-mobile/blob/main/docs/glossary.md#active-account).)
We'll need to solve this as part of #5005, where we'll potentially have several fetches going concurrently for several different accounts, and each will need to know which account the results should get stored under.
But pending that, in our current world where we have server data for just the one active account at a time, it'd be enough to discard the fetch's result (and ideally cancel the fetch itself, #4170) if the active account has changed by then.
Currently we do this in the event-queue long-polling loop in `startEventPolling`, which is the most important case, but not most other places -- notably, not in `doInitialFetch`, or the various message-fetching actions in `fetchActions.js`.
Issues #4170 and #4659 cover parts of how we might do this.
Issue #3791 is basically an example symptom of this.
| priority | discard fetches when the active account has changed when we go and fetch some data from the server we should be sure to associate the resulting data with the right account and not with some other of the user s accounts in particular this should be the case even if the fetch took a while and if in the interim the user went and switched to look at a different account the new we ll need to solve this as part of where we ll potentially have several fetches going concurrently for several different accounts and each will need to know which account the results should get stored under but pending that in our current world where we have server data for just the one active account at a time it d be enough to discard the fetch s result and ideally cancel the fetch itself if the active account has changed by then currently we do this in the event queue long polling loop in starteventpolling which is the most important case but not most other places notably not in doinitialfetch or the various message fetching actions in fetchactions js issues and cover parts of how we might do this issue is basically an example symptom of this | 1 |
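zulip-mobile itself is JavaScript, but the "capture the account at dispatch time, compare at completion" idea described above is language-agnostic. An illustrative Python model of it — class and method names are mine, not the app's actual API:

```python
class FetchStore:
    """Discards fetch results that complete after the active account changed."""

    def __init__(self, active_account):
        self.active_account = active_account
        self.data = {}

    def start_fetch(self):
        # Record which account this fetch belongs to, before any waiting.
        return self.active_account

    def complete_fetch(self, owner_account, result):
        if owner_account != self.active_account:
            # The user switched accounts mid-flight: drop the stale result
            # rather than filing it under the wrong account.
            return False
        self.data[owner_account] = result
        return True
```

The same tag-and-compare check would apply equally in `doInitialFetch` and the message-fetching actions, not just the event-queue loop.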
192,296 | 6,848,371,234 | IssuesEvent | 2017-11-13 18:17:07 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | Amp Viewer Does Not Display Text and Overlaps Content | Category: Framework Category: Presentation P1: High Priority Type: Bug | ## What's the issue?
The amp-viewer incorrectly overlaps content and does not display fonts. This is only present on certain iOS / browser versions (see table below).
## How do we reproduce the issue?
*Via search:*
1. Google `not rays pizza brooklyn`
2. Click on the AMP result.
*Directly:*
1. https://www.google.com/amp/s/www.notrayspizza.com/
Notice the buttons `View Full Menu` and `Order Now on Slice` do not have text, sometimes eventually populate, and overlap other content.
**No Text**
<img width="375" alt="png 2" src="https://user-images.githubusercontent.com/8161401/32114551-de2a9e2a-bb11-11e7-9aee-fce689c79079.png">
**Overlapping Content**
<img width="375" alt="png 1" src="https://user-images.githubusercontent.com/8161401/32114553-df8d5c44-bb11-11e7-82ea-246a76b3de4a.png">
**No Text**
<img width="375" alt="png" src="https://user-images.githubusercontent.com/8161401/32114558-e1e3403a-bb11-11e7-8f0f-3b4ab584e945.png">
## What browsers are affected?
iOS version | Chrome Version | Safari Version | buggy/good
-- | -- | -- | --
10.3.2 (14F89) | 61.0.3163.73 | | good
10.3.2 (14F89) | | 10.0 | good
10.3.3 (14G60) | | 10.0 | good
10.3.3 (14G60) | 61.0.3163.73 | 10.0 | good
10.2.1 (14D27) | | 10.0 | good
11.0.3 (15A432) | | 11.0 | buggy
11.0.3 | 61.0.3163 | | buggy
11.0.3 | | 11.0 | buggy
10.2 (14C92) | | 10.0 | buggy
11.0.3 | 61.0.3163.73 | | buggy
## Which AMP version is affected?
Amp Version 1508794187431
| 1.0 | Amp Viewer Does Not Display Text and Overlaps Content - ## What's the issue?
The amp-viewer incorrectly overlaps content and does not display fonts. This is only present on certain iOS / browser versions (see table below).
## How do we reproduce the issue?
*Via search:*
1. Google `not rays pizza brooklyn`
2. Click on the AMP result.
*Directly:*
1. https://www.google.com/amp/s/www.notrayspizza.com/
Notice the buttons `View Full Menu` and `Order Now on Slice` do not have text, sometimes eventually populate, and overlap other content.
**No Text**
<img width="375" alt="png 2" src="https://user-images.githubusercontent.com/8161401/32114551-de2a9e2a-bb11-11e7-9aee-fce689c79079.png">
**Overlapping Content**
<img width="375" alt="png 1" src="https://user-images.githubusercontent.com/8161401/32114553-df8d5c44-bb11-11e7-82ea-246a76b3de4a.png">
**No Text**
<img width="375" alt="png" src="https://user-images.githubusercontent.com/8161401/32114558-e1e3403a-bb11-11e7-8f0f-3b4ab584e945.png">
## What browsers are affected?
iOS version | Chrome Version | Safari Version | buggy/good
-- | -- | -- | --
10.3.2 (14F89) | 61.0.3163.73 | | good
10.3.2 (14F89) | | 10.0 | good
10.3.3 (14G60) | | 10.0 | good
10.3.3 (14G60) | 61.0.3163.73 | 10.0 | good
10.2.1 (14D27) | | 10.0 | good
11.0.3 (15A432) | | 11.0 | buggy
11.0.3 | 61.0.3163 | | buggy
11.0.3 | | 11.0 | buggy
10.2 (14C92) | | 10.0 | buggy
11.0.3 | 61.0.3163.73 | | buggy
## Which AMP version is affected?
Amp Version 1508794187431
| priority | amp viewer does not display text and overlaps content what s the issue the amp viewer incorrectly overlaps content and does not display fonts this is only present on certain ios browser versions see table below how do we reproduce the issue via search google not rays pizza brooklyn click on the amp result directly notice the buttons view full menu and order now on slice do not have text sometimes eventually populate and overlap other content no text img width alt png src overlapping content img width alt png src no text img width alt png src what browsers are affected ios version chrome version safari version buggy good good good good good good buggy buggy buggy buggy buggy which amp version is affected amp version | 1 |
126,507 | 4,996,715,090 | IssuesEvent | 2016-12-09 14:45:59 | viagogo/SQLComparer | https://api.github.com/repos/viagogo/SQLComparer | closed | NRE when searching and DB doesn't exist on server | Priority - High | Occurs when DB exists in some of the environments selected but not all
| 1.0 | NRE when searching and DB doesn't exist on server - Occurs when DB exists in some of the environments selected but not all
| priority | nre when searching and db doesn t exist on server occurs when db exists in some of the environments selected but not all | 1 |
693,677 | 23,785,419,629 | IssuesEvent | 2022-09-02 09:37:22 | redhat-developer/odo | https://api.github.com/repos/redhat-developer/odo | closed | 2nd non-default deploy command defined in devfile causes odo to report error | kind/bug priority/High v2 | /kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Operating System:**
```
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
```
**Output of `odo version`:**
```
$ odo version
odo v2.5.0 (724f16e68)
```
## How did you run odo exactly?
`$ odo deploy`
with the following two deploy commands defined in the devfile:
```
- id: deploy
composite:
commands:
- build-image-stack-provided
- outerloop-deploy
group:
kind: deploy
isDefault: true
- id: deploy-app-image
composite:
commands:
- build-image-app-provided
- outerloop-deploy
group:
kind: deploy
isDefault: false
```
id: deploy is the default commands for the devfile and
id: deploy-app-image is a second non-default command.
## Actual behavior
odo deply reports an error:
```
$ odo deploy
✗ more than one default deploy command found in devfile, should not happen
```
## Expected behavior
i expect to be able to call either:
`$ odo deploy`
or
`$odo deploy-app-image`
to have two slightly different build strategies occur as part of deploy.
## Any logs, error output, etc?
| 1.0 | 2nd non-default deploy command defined in devfile causes odo to report error - /kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Operating System:**
```
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
```
**Output of `odo version`:**
```
$ odo version
odo v2.5.0 (724f16e68)
```
## How did you run odo exactly?
`$ odo deploy`
with the following two deploy commands defined in the devfile:
```
- id: deploy
composite:
commands:
- build-image-stack-provided
- outerloop-deploy
group:
kind: deploy
isDefault: true
- id: deploy-app-image
composite:
commands:
- build-image-app-provided
- outerloop-deploy
group:
kind: deploy
isDefault: false
```
id: deploy is the default commands for the devfile and
id: deploy-app-image is a second non-default command.
## Actual behavior
odo deply reports an error:
```
$ odo deploy
✗ more than one default deploy command found in devfile, should not happen
```
## Expected behavior
i expect to be able to call either:
`$ odo deploy`
or
`$odo deploy-app-image`
to have two slightly different build strategies occur as part of deploy.
## Any logs, error output, etc?
| priority | non default deploy command defined in devfile causes odo to report error kind bug welcome we kindly ask you to fill out the issue template below use the google group if you have a question rather than a bug or feature request the group is at thanks for understanding and for contributing to the project what versions of software are you using operating system distributor id ubuntu description ubuntu lts release codename focal output of odo version odo version odo how did you run odo exactly odo deploy with the following two deploy commands defined in the devfile id deploy composite commands build image stack provided outerloop deploy group kind deploy isdefault true id deploy app image composite commands build image app provided outerloop deploy group kind deploy isdefault false id deploy is the default commands for the devfile and id deploy app image is a second non default command actual behavior odo deply reports an error odo deploy ✗ more than one default deploy command found in devfile should not happen expected behavior i expect to be able to call either odo deploy or odo deploy app image to have two slightly different build strategies occur as part of deploy any logs error output etc | 1 |
101,777 | 4,135,661,251 | IssuesEvent | 2016-06-13 00:17:26 | TotalCore/MusicBot | https://api.github.com/repos/TotalCore/MusicBot | opened | Bot Crash | Bug Bug: Confirmed Bug: Crashing Bug: Major High Priority Issue: Needs PR | Savnith - Today at 7:15 PM
;play savnith abus
Music PugBOT - Today at 7:15 PM
Traceback (most recent call last):
File "C:\Users\tsann\MusicBot\musicbot\bot.py", line 1912, in on_message
response = await handler(**handler_kwargs)
File "C:\Users\tsann\MusicBot\musicbot\bot.py", line 847, in cmd_play
await self.send_typing(channel)
File "C:\Users\tsann\MusicBot\musicbot\bot.py", line 522, in send_typing
return await super().send_typing(destination)
File "C:\Users\tsann\AppData\Local\Programs\Python\Python35\lib\site-packages\discord\client.py", line 905, in send_typing
yield from utils._verify_successful_response(response)
File "C:\Users\tsann\AppData\Local\Programs\Python\Python35\lib\site-packages\discord\utils.py", line 238, in _verify_successful_response
raise HTTPException(response, message, text)
discord.errors.HTTPException: Bad Gateway (status code: 502)
Bot crashed. | 1.0 | Bot Crash - Savnith - Today at 7:15 PM
;play savnith abus
Music PugBOT - Today at 7:15 PM
Traceback (most recent call last):
File "C:\Users\tsann\MusicBot\musicbot\bot.py", line 1912, in on_message
response = await handler(**handler_kwargs)
File "C:\Users\tsann\MusicBot\musicbot\bot.py", line 847, in cmd_play
await self.send_typing(channel)
File "C:\Users\tsann\MusicBot\musicbot\bot.py", line 522, in send_typing
return await super().send_typing(destination)
File "C:\Users\tsann\AppData\Local\Programs\Python\Python35\lib\site-packages\discord\client.py", line 905, in send_typing
yield from utils._verify_successful_response(response)
File "C:\Users\tsann\AppData\Local\Programs\Python\Python35\lib\site-packages\discord\utils.py", line 238, in _verify_successful_response
raise HTTPException(response, message, text)
discord.errors.HTTPException: Bad Gateway (status code: 502)
Bot crashed. | priority | bot crash savnith today at pm play savnith abus music pugbot today at pm traceback most recent call last file c users tsann musicbot musicbot bot py line in on message response await handler handler kwargs file c users tsann musicbot musicbot bot py line in cmd play await self send typing channel file c users tsann musicbot musicbot bot py line in send typing return await super send typing destination file c users tsann appdata local programs python lib site packages discord client py line in send typing yield from utils verify successful response response file c users tsann appdata local programs python lib site packages discord utils py line in verify successful response raise httpexception response message text discord errors httpexception bad gateway status code bot crashed | 1 |
814,796 | 30,522,213,478 | IssuesEvent | 2023-07-19 08:51:07 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | Update webhook notification config error | kind/bug priority/high kind/need-to-verify | <!--
You don't need to remove this comment section, it's invisible on the issues page.
## General remarks
* Attention, please fill out this issues form using English only!
* 注意!GitHub Issue 仅支持英文,中文 Issue 请在 [论坛](https://kubesphere.com.cn/forum/) 提交。
* This form is to report bugs. For general usage questions you can join our Slack channel
[KubeSphere-users](https://join.slack.com/t/kubesphere/shared_invite/enQtNTE3MDIxNzUxNzQ0LTZkNTdkYWNiYTVkMTM5ZThhODY1MjAyZmVlYWEwZmQ3ODQ1NmM1MGVkNWEzZTRhNzk0MzM5MmY4NDc3ZWVhMjE)
-->
https://github.com/kubesphere/issues/issues/241
**Describe the Bug**
A clear and concise description of what the bug is.
For UI issues please also add a screenshot that shows the issue.
**Versions Used**
KubeSphere:
Kubernetes: (If KubeSphere installer used, you can skip this)
**Environment**
How many nodes and their hardware configuration:
For example: CentOS 7.5 / 3 masters: 8cpu/8g; 3 nodes: 8cpu/16g
(and other info are welcomed to help us debugging)
**How To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
| 1.0 | Update webhook notification config error - <!--
You don't need to remove this comment section, it's invisible on the issues page.
## General remarks
* Attention, please fill out this issues form using English only!
* 注意!GitHub Issue 仅支持英文,中文 Issue 请在 [论坛](https://kubesphere.com.cn/forum/) 提交。
* This form is to report bugs. For general usage questions you can join our Slack channel
[KubeSphere-users](https://join.slack.com/t/kubesphere/shared_invite/enQtNTE3MDIxNzUxNzQ0LTZkNTdkYWNiYTVkMTM5ZThhODY1MjAyZmVlYWEwZmQ3ODQ1NmM1MGVkNWEzZTRhNzk0MzM5MmY4NDc3ZWVhMjE)
-->
https://github.com/kubesphere/issues/issues/241
**Describe the Bug**
A clear and concise description of what the bug is.
For UI issues please also add a screenshot that shows the issue.
**Versions Used**
KubeSphere:
Kubernetes: (If KubeSphere installer used, you can skip this)
**Environment**
How many nodes and their hardware configuration:
For example: CentOS 7.5 / 3 masters: 8cpu/8g; 3 nodes: 8cpu/16g
(and other info are welcomed to help us debugging)
**How To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
| priority | update webhook notification config error you don t need to remove this comment section it s invisible on the issues page general remarks attention please fill out this issues form using english only 注意!github issue 仅支持英文,中文 issue 请在 提交。 this form is to report bugs for general usage questions you can join our slack channel describe the bug a clear and concise description of what the bug is for ui issues please also add a screenshot that shows the issue versions used kubesphere kubernetes if kubesphere installer used you can skip this environment how many nodes and their hardware configuration for example centos masters nodes and other info are welcomed to help us debugging how to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior a clear and concise description of what you expected to happen | 1 |
663,997 | 22,217,977,885 | IssuesEvent | 2022-06-08 05:05:29 | OpenMined/PySyft | https://api.github.com/repos/OpenMined/PySyft | closed | Support Zero-knowledge Proofs | Type: New Feature :heavy_plus_sign: Priority: 2 - High :cold_sweat: 0.4 | ## Description
This ticket relates to implementing some kind of MVP for Zero-knowledge proofs integration for educational purposes for the 0.4.0 Milestone.
## Contacts
- @madhavajay
The library for this will be:
https://github.com/spring-epfl/zksk
## TODO
- [ ] Initial Hello World Notebooks
- [x] Serialization and Deserialization of ZKP types
- [ ] 100% of API in AST
- [x] LMW support
- [ ] Flesh out Notebook Examples
## Definition of Done
A notebook is created in examples/zero-knowledge-proofs or something similar which demonstrates how ZKP could be used with Duet sufficiently to achieve the educational goals. | 1.0 | Support Zero-knowledge Proofs - ## Description
This ticket relates to implementing some kind of MVP for Zero-knowledge proofs integration for educational purposes for the 0.4.0 Milestone.
## Contacts
- @madhavajay
The library for this will be:
https://github.com/spring-epfl/zksk
## TODO
- [ ] Initial Hello World Notebooks
- [x] Serialization and Deserialization of ZKP types
- [ ] 100% of API in AST
- [x] LMW support
- [ ] Flesh out Notebook Examples
## Definition of Done
A notebook is created in examples/zero-knowledge-proofs or something similar which demonstrates how ZKP could be used with Duet sufficiently to achieve the educational goals. | priority | support zero knowledge proofs description this ticket relates to implementing some kind of mvp for zero knowledge proofs integration for educational purposes for the milestone contacts madhavajay the library for this will be todo initial hello world notebooks serialization and deserialization of zkp types of api in ast lmw support flesh out notebook examples definition of done a notebook is created in examples zero knowledge proofs or something similar which demonstrates how zkp could be used with duet sufficiently to achieve the educational goals | 1 |
305,899 | 9,378,139,532 | IssuesEvent | 2019-04-04 12:10:56 | babykarte/babykarte.github.io | https://api.github.com/repos/babykarte/babykarte.github.io | opened | Prepairing the release of v.1.1 | high priority status | @Discostu36 wanted the release of a v.1.1
In order to release we need to work on the following ToDo list:
- [ ] Closing #38 | 1.0 | Prepairing the release of v.1.1 - @Discostu36 wanted the release of a v.1.1
In order to release we need to work on the following ToDo list:
- [ ] Closing #38 | priority | prepairing the release of v wanted the release of a v in order to release we need to work on the following todo list closing | 1 |
711,097 | 24,450,088,402 | IssuesEvent | 2022-10-06 21:51:00 | usdigitalresponse/usdr-gost | https://api.github.com/repos/usdigitalresponse/usdr-gost | opened | [ID] Keywords and eligibility codes should NOT filter the My Grants tab | bug ID Tool ux goodness high priority | Current behavior:
- setting new keywords and eligibility codes filters what is seen in both the `Browse Grants` and `My Grants` tabs
Expected behavior:
- setting new keywords and eligibility code should only filter `Browse Grants`, NOT the `My Grants` ta
| 1.0 | [ID] Keywords and eligibility codes should NOT filter the My Grants tab - Current behavior:
- setting new keywords and eligibility codes filters what is seen in both the `Browse Grants` and `My Grants` tabs
Expected behavior:
- setting new keywords and eligibility code should only filter `Browse Grants`, NOT the `My Grants` ta
| priority | keywords and eligibility codes should not filter the my grants tab current behavior setting new keywords and eligibility codes filters what is seen in both the browse grants and my grants tabs expected behavior setting new keywords and eligibility code should only filter browse grants not the my grants ta | 1 |
522,312 | 15,158,361,008 | IssuesEvent | 2021-02-12 00:57:43 | NOAA-GSL/MATS | https://api.github.com/repos/NOAA-GSL/MATS | closed | Irregular model cadences occasionally return init cycles in the wrong order. | Priority: High Project: MATS Status: Closed Type: Bug | ---
Author Name: **molly.b.smith** (@mollybsmith-noaa)
Original Redmine Issue: 77253, https://vlab.ncep.noaa.gov/redmine/issues/77253
Original Date: 2020-04-01
Original Assignee: molly.b.smith
---
Prior to March 17, the allowed model cycles for HRRRE were [00Z, 12Z, 18Z], while after March 17, they’re [00Z, 06Z, 12Z, 18Z]. If a time interval is plotted that includes both sides of the March 17 boundary, MATS takes the union of these two arrays, which for some reason ends up being [06Z, 18Z, 00Z, 12Z], which is…not in the right order. This results in a graph in which all of the points are in the right place, but the line is connecting points it shouldn’t be.
| 1.0 | Irregular model cadences occasionally return init cycles in the wrong order. - ---
Author Name: **molly.b.smith** (@mollybsmith-noaa)
Original Redmine Issue: 77253, https://vlab.ncep.noaa.gov/redmine/issues/77253
Original Date: 2020-04-01
Original Assignee: molly.b.smith
---
Prior to March 17, the allowed model cycles for HRRRE were [00Z, 12Z, 18Z], while after March 17, they’re [00Z, 06Z, 12Z, 18Z]. If a time interval is plotted that includes both sides of the March 17 boundary, MATS takes the union of these two arrays, which for some reason ends up being [06Z, 18Z, 00Z, 12Z], which is…not in the right order. This results in a graph in which all of the points are in the right place, but the line is connecting points it shouldn’t be.
| priority | irregular model cadences occasionally return init cycles in the wrong order author name molly b smith mollybsmith noaa original redmine issue original date original assignee molly b smith prior to march the allowed model cycles for hrrre were while after march they’re if a time interval is plotted that includes both sides of the march boundary mats takes the union of these two arrays which for some reason ends up being which is…not in the right order this results in a graph in which all of the points are in the right place but the line is connecting points it shouldn’t be | 1 |