Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
254,011 | 8,068,750,602 | IssuesEvent | 2018-08-06 00:31:10 | okTurtles/group-income-simple | https://api.github.com/repos/okTurtles/group-income-simple | closed | Create a cross-platform sandboxed dev environment | Note:Research Note:Security Priority:High | ## Problem
It's insecure to download and run random code from the net.
## Solution
Ideally, a simple way to chroot/sandbox it by specifying:
- what folder it's restricted to
- whitelisting resources it needs outside of that folder
something like containers/[docker](https://blog.docker.com/2016/03/docker-for-mac-windows-beta/)/unikernels.
- https://www.chromium.org/developers/design-documents/sandbox/osx-sandboxing-design
- https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/sandbox-exec.1.html
See if it's possible to design the project such that it's a requirement to develop using the sandbox (i.e. the project won't function/run otherwise).
| 1.0 | Create a cross-platform sandboxed dev environment - ## Problem
It's insecure to download and run random code from the net.
## Solution
Ideally, a simple way to chroot/sandbox it by specifying:
- what folder it's restricted to
- whitelisting resources it needs outside of that folder
something like containers/[docker](https://blog.docker.com/2016/03/docker-for-mac-windows-beta/)/unikernels.
- https://www.chromium.org/developers/design-documents/sandbox/osx-sandboxing-design
- https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/sandbox-exec.1.html
See if it's possible to design the project such that it's a requirement to develop using the sandbox (i.e. the project won't function/run otherwise).
| priority | create a cross platform sandboxed dev environment problem it s insecure to download and run random code form the net solution ideally a simple way to chroot sandbox it by specifying what folder it s restricted to whitelisting resources it needs outside of that folder something like containers see if it s possible to design project such that it s a requirement to develop using the sandbox i e project won t function run otherwise | 1 |
283,500 | 8,719,728,935 | IssuesEvent | 2018-12-08 03:43:16 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Unable to run client/server to cielo | bug crash likelihood medium priority reviewed severity high wrong results |
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1825
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Unable to run client/server to cielo
Assigned to: Eric Brugger
Category:
Target version: 2.7.3
Author: Eric Brugger
Start: 04/25/2014
Due date:
% Done: 100
Estimated time: 1.0
Created: 04/25/2014 03:27 pm
Updated: 05/15/2014 11:30 am
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.7.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Comments:
Ping Wang e-mailed that she was unable to run visit on cielo. I investigated and the gateway machine to LANL has changed. It is now red-wtrw.lanl.gov. I changed the host profile for 2.6.3 and 2.7.2 on the closed LC network so she should be good to go. It should also be changed in the repo.
I committed revisions 23324 and 23326 to the 2.7 RC and trunk with the following changes: 1) I changed the gateway host for LANL's cielo system to red-wtrw.lanl.gov since the old gateway cluster was retired. I added a blurb to the release notes about the change. This resolves #1825.
M resources/help/en_US/relnotes2.7.3.html
M resources/hosts/llnl_closed/host_lanl_closed_cielo.xml
| 1.0 | Unable to run client/server to cielo -
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1825
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Unable to run client/server to cielo
Assigned to: Eric Brugger
Category:
Target version: 2.7.3
Author: Eric Brugger
Start: 04/25/2014
Due date:
% Done: 100
Estimated time: 1.0
Created: 04/25/2014 03:27 pm
Updated: 05/15/2014 11:30 am
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.7.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Comments:
Ping Wang e-mailed that she was unable to run visit on cielo. I investigated and the gateway machine to LANL has changed. It is now red-wtrw.lanl.gov. I changed the host profile for 2.6.3 and 2.7.2 on the closed LC network so she should be good to go. It should also be changed in the repo.
I committed revisions 23324 and 23326 to the 2.7 RC and trunk with the following changes: 1) I changed the gateway host for LANL's cielo system to red-wtrw.lanl.gov since the old gateway cluster was retired. I added a blurb to the release notes about the change. This resolves #1825.
M resources/help/en_US/relnotes2.7.3.html
M resources/hosts/llnl_closed/host_lanl_closed_cielo.xml
| priority | unable to run client server to cielo redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority urgent subject unable to run client server to cielo assigned to eric brugger category target version author eric brugger start due date done estimated time created pm updated am likelihood occasional severity crash wrong results found in version impact expected use os all support group any description comments ping wang e mailed that she was unable to run visit on cielo i investigated and the gateway machine to lanl has changed it is now red wtrw lanl gov i changed the host profile for and on the closed lc network so she should be good to go it should also be changed in the repo i committed revisions and to the rc and trunk with thefollowing changes i changed the gateway host for lanl s cielo system to red wtrw lanl gov since the old gateway cluster was retired i added a blurb to the release notes about the change this resolves m resources help en us htmlm resources hosts llnl closed host lanl closed cielo xml | 1 |
186,224 | 6,734,492,503 | IssuesEvent | 2017-10-18 18:14:30 | GoogleCloudPlatform/google-cloud-eclipse | https://api.github.com/repos/GoogleCloudPlatform/google-cloud-eclipse | closed | Java 8 pom.xml doesn't depend on JSP API | bug high priority | Non-maven projects have this in the runtime.
```
<dependencies>
<!-- Compile/runtime dependencies -->
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
<version>3.1.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>jstl</groupId>
<artifactId>jstl</artifactId>
<version>1.2</version>
</dependency>
``` | 1.0 | Java 8 pom.xml doesn't depend on JSP API - Non-maven projects have this in the runtime.
```
<dependencies>
<!-- Compile/runtime dependencies -->
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
<version>3.1.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>jstl</groupId>
<artifactId>jstl</artifactId>
<version>1.2</version>
</dependency>
``` | priority | java pom xml doesn t depend on jsp api non maven projects have this in the runtime javax servlet javax servlet api provided jstl jstl | 1 |
499,011 | 14,437,468,759 | IssuesEvent | 2020-12-07 11:36:00 | Praful932/Kitabe | https://api.github.com/repos/Praful932/Kitabe | closed | Visual Feedback/Validation after a book is rated | enhancement good first issue priority:high | There should be visual feedback or validation for the user after a book has been rated.

| 1.0 | Visual Feedback/Validation after a book is rated - There should be visual feedback or validation for the user after a book has been rated.

| priority | visual feedback validation after a book is rated there should be visual feedback or validation for the user after a book has been rated | 1 |
576,746 | 17,093,634,051 | IssuesEvent | 2021-07-08 21:14:16 | aqualinkorg/aqualink-app | https://api.github.com/repos/aqualinkorg/aqualink-app | closed | Integrate OceanSense data and charts | enhancement frontend priority:high | We would like to integrate new datasets coming from OceanSense and display it in new charts.
- The data will come directly from an external API
- Only a very small sample of reefs will be affected

| 1.0 | Integrate OceanSense data and charts - We would like to integrate new datasets coming from OceanSense and display it in new charts.
- The data will come directly from an external API
- Only a very small sample of reefs will be affected

| priority | integrate oceansense data and charts we would like to integrate new datasets coming from oceansense and display it in new charts the data will come directly from an external api only a very small sample of reefs will be affected | 1 |
566,865 | 16,832,510,698 | IssuesEvent | 2021-06-18 07:36:11 | JulianTek/dotNetRogue | https://api.github.com/repos/JulianTek/dotNetRogue | closed | API Data can be decoded into an object | high priority user story | Acceptance criteria:
An object is the exact same as is noted in the API
Definition of done:
- Tests are written
- Tests pass
- Code is clean
- Code is readable | 1.0 | API Data can be decoded into an object - Acceptance criteria:
An object is the exact same as is noted in the API
Definition of done:
- Tests are written
- Tests pass
- Code is clean
- Code is readable | priority | api data can be decoded into an object acceptance criteria an object is the exact same as is noted in the api definition of done tests are written tests pass code is clean code is readable | 1 |
307,703 | 9,420,921,523 | IssuesEvent | 2019-04-11 04:38:18 | CS2113-AY1819S2-M11-1/main | https://api.github.com/repos/CS2113-AY1819S2-M11-1/main | closed | As a league organizer, I can sort the teams by their rank points | priority.High type.Story | so that I can find out easily who is leading in the league currently | 1.0 | As a league organizer, I can sort the teams by their rank points - so that I can find out easily who is leading in the league currently | priority | as a league organizer i can sort the teams by their rank points so that i can find out easily who is leading in the league currently | 1 |
231,627 | 7,641,454,489 | IssuesEvent | 2018-05-08 05:04:17 | openshiftio/openshift.io | https://api.github.com/repos/openshiftio/openshift.io | closed | Limit app name length to 40 characters to avoid jenkins plugin limits | SEV2-high priority/P0 team/docs team/launcher team/platform | As seen in https://github.com/openshiftio/openshift.io/issues/2527 there is a yet-to-be-identified plugin in jenkins or pipeline libraries that prevents long names (>40) from being used.
Opening this as limiting in UI is the most feasible short term fix.
/cc @qodfathr | 1.0 | Limit app name length to 40 characters to avoid jenkins plugin limits - As seen in https://github.com/openshiftio/openshift.io/issues/2527 there is a yet-to-be-identified plugin in jenkins or pipeline libraries that prevents long names (>40) from being used.
Opening this as limiting in UI is the most feasible short term fix.
/cc @qodfathr | priority | limit app name length to characters to avoid jenkins plugin limits as seen in there is a yet to be identified plugin in jenkins or pipeline libraries that prevent long names to be used opening this as limiting in ui is the most feasible short term fix cc qodfathr | 1 |
268,462 | 8,407,144,226 | IssuesEvent | 2018-10-11 20:02:40 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | opened | Configurator: Safari can not display interface modal during the configurator | Priority: Critical Priority: High Type: Bug | I can not click on my interface name using Safari Version 12.0 (14606.1.36.1.9); on the other hand, it works on Chrome Canary.
PacketFence package version: packetfence-8.1.9-0.201810110000.el7.noarch
<img width="1163" alt="screen shot 2018-10-11 at 3 56 26 pm" src="https://user-images.githubusercontent.com/5261214/46830469-b1547900-cd6e-11e8-983a-23452ee1ff65.png"> | 2.0 | Configurator: Safari can not display interface modal during the configurator - I can not click on my interface name using Safari Version 12.0 (14606.1.36.1.9); on the other hand, it works on Chrome Canary.
PacketFence package version: packetfence-8.1.9-0.201810110000.el7.noarch
<img width="1163" alt="screen shot 2018-10-11 at 3 56 26 pm" src="https://user-images.githubusercontent.com/5261214/46830469-b1547900-cd6e-11e8-983a-23452ee1ff65.png"> | priority | configurator safari can not display interface modal during the configurator i can not click on my interface name using safari version and the other hand works on canary chrome packetfence package version packetfence noarch img width alt screen shot at pm src | 1 |
240,608 | 7,803,499,374 | IssuesEvent | 2018-06-11 00:43:48 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | [composer] Command-Palette-> Format Document seems not working | Priority/Highest Severity/Major Type/Bug component/Composer | **Description:**
Issue
* Right click on source -> Command-Palette -> Format Document: it doesn't seem to do anything.
I assumed it is similar to refactor source code, is it?

**Affected Versions:**
ballerina-tools-0.970.1-SNAPSHOT
**OS, DB, other environment details and versions:**
Ubuntu 15.10
Firefox 59.0.2 | 1.0 | [composer] Command-Palette-> Format Document seems not working - **Description:**
Issue
* Right click on source -> Command-Palette -> Format Document: it doesn't seem to do anything.
I assumed it is similar to refactor source code, is it?

**Affected Versions:**
ballerina-tools-0.970.1-SNAPSHOT
**OS, DB, other environment details and versions:**
Ubuntu 15.10
Firefox 59.0.2 | priority | command palette format document seems not working description issue right click on source command palette format document seems it doesn t do any function i assumed it is similar to refactor source code is it affected versions ballerina tools snapshot os db other environment details and versions ubuntu firefox | 1 |
442,091 | 12,737,432,702 | IssuesEvent | 2020-06-25 18:45:31 | nathan-weinberg/jeeves | https://api.github.com/repos/nathan-weinberg/jeeves | closed | Prepend blocker name with status in report | RFE high priority | We are already using the Bugzilla and Jira APIs to fetch the names of bugs/tickets via their ID. We can extend the usage of the API by also fetching the statuses of blockers and prepending them to the names we are fetching within the report, providing additional useful information with minimal code changes and no real extra crowding of the existing report format. | 1.0 | Prepend blocker name with status in report - We are already using the Bugzilla and Jira APIs to fetch the names of bugs/tickets via their ID. We can extend the usage of the API by also fetching the statuses of blockers and prepending them to the names we are fetching within the report, providing additional useful information with minimal code changes and no real extra crowding of the existing report format. | priority | prepend blocker name with status in report we are already using the bugzilla and jira apis to fetch the names of bugs tickets via their id we can extend the usage of the api by also fetching the statuses of blockers and prepending them to the names we are fetching within the report providing additional useful information with minimal code changes and no real extra crowding of the existing report format | 1 |
328,614 | 9,997,428,301 | IssuesEvent | 2019-07-12 04:23:45 | wso2-cellery/sdk | https://api.github.com/repos/wso2-cellery/sdk | closed | Reinstalling the cellery without uninstalling causing an issue - native function not available | Priority/Highest Severity/Blocker bug | Steps to reproduce.
1) Have previous cellery 0.2.1 installation
2) Now install cellery 0.3.0, without uninstalling the previous version with ./uninstall.sh
3) Now perform cellery run or cellery build. The error below will be prompted.
```
~/cellerytest/hello$ cellery build hello.bal pzfreo/hello:0.3.0
โ Building image pzfreo/hello:0.3.0
Build Failed.
======================
error: failed to compile file: native function not available celleryio/cellery:0.0.0:runInstances
Error occurred while building cell image: exit status 1
``` | 1.0 | Reinstalling the cellery without uninstalling causing an issue - native function not available - Steps to reproduce.
1) Have previous cellery 0.2.1 installation
2) Now install cellery 0.3.0, without uninstalling the previous version with ./uninstall.sh
3) Now perform cellery run or cellery build. The error below will be prompted.
```
~/cellerytest/hello$ cellery build hello.bal pzfreo/hello:0.3.0
โ Building image pzfreo/hello:0.3.0
Build Failed.
======================
error: failed to compile file: native function not available celleryio/cellery:0.0.0:runInstances
Error occurred while building cell image: exit status 1
``` | priority | reinstalling the cellery without uninstalling causing an issue native function not available steps to reproduce have previous cellery installation now install cellery without uninstalling the pervious script with uninstall sh now perform cellery run or celllery build below error will be prompted cellerytest hello cellery build hello bal pzfreo hello โ building image pzfreo hello build failed error failed to compile file native function not available celleryio cellery runinstances error occurred while building cell image exit status | 1 |
458,237 | 13,171,492,454 | IssuesEvent | 2020-08-11 16:45:52 | pulibrary/orangelight | https://api.github.com/repos/pulibrary/orangelight | opened | Display "snippets" from Google Books and/or Hathi if possible | feature high-priority | Displaying snippets of non-public Google/Hathi items will help users evaluate items, and reduce unnecessary pickup or scanning requests. | 1.0 | Display "snippets" from Google Books and/or Hathi if possible - Displaying snippets of non-public Google/Hathi items will help users evaluate items, and reduce unnecessary pickup or scanning requests. | priority | display snippets from google books and or hathi if possible displaying snippets of non public google hathi items will help users evaluate items and reduce unnecessary pickup or scanning requests | 1 |
112,009 | 4,500,816,848 | IssuesEvent | 2016-09-01 06:59:57 | DanGrew/JenkinsTestTracker | https://api.github.com/repos/DanGrew/JenkinsTestTracker | closed | Add assignment should auto expand | High Priority | Should expand user's branch if collapsed when something is assigned. | 1.0 | Add assignment should auto expand - Should expand user's branch if collapsed when something is assigned. | priority | add assignment should auto expand should expand user s branch if collapsed when something is assigned | 1 |
508,658 | 14,704,203,589 | IssuesEvent | 2021-01-04 16:08:58 | aqualinkorg/aqualink-app | https://api.github.com/repos/aqualinkorg/aqualink-app | closed | Filter table and map to sites with a spotter only | backend frontend priority:high | Add a simple toggle for super admins between the details box and the table.
`has spotter` to filter down the table and map to only display sites with a linked spotter. | 1.0 | Filter table and map to sites with a spotter only - Add a simple toggle for super admins between the details box and the table.
`has spotter` to filter down the table and map to only display sites with a linked spotter. | priority | filter table and map to sites with a spotter only add a simple toggle for super admins between the details box and the table has spotter to filter down the table and map to only display sites with a linked spotter | 1 |
786,156 | 27,636,751,312 | IssuesEvent | 2023-03-10 14:59:34 | KinsonDigital/TagVerifier | https://api.github.com/repos/KinsonDigital/TagVerifier | opened | 🚧Update actions/core package | dependency update high priority preview | ### Complete The Item Below
- [X] I have updated the title without removing the 🚧 emoji.
### Description
Update the **_actions/core_** package to the latest version.
This update is required due to changes that GitHub has made to how action outputs are dealt with. This is a security update.
Click [here](https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/) for more information.
**Package to update:**
```diff
"dependencies": {
- "@actions/core": "^1.4.0",
+ "@actions/core": "^1.10.0",
"@actions/github": "^5.0.0",
"@vercel/ncc": "^0.28.6",
"axios": "^0.21.1",
"node": "^16.5.0"
},
```
### Acceptance Criteria
- [ ] **actions/core** package updated to latest version.
### ToDo Items
- [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below.
- [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below.
- [X] Issue linked to the correct project _(if applicable)_.
- [X] Issue linked to the correct milestone _(if applicable)_.
- [ ] Draft pull request created and linked to this issue _(only required with code changes)_.
### Issue Dependencies
_No response_
### Related Work
_No response_
### Additional Information:
**_<details closed><summary>Change Type Labels</summary>_**
| Change Type | Label |
|---------------------|----------------------|
| Bug Fixes | `🐛bug` |
| Breaking Changes | `🧨breaking changes` |
| New Feature | `✨new feature` |
| Workflow Changes | `workflow` |
| Code Doc Changes | `🗒️documentation/code` |
| Product Doc Changes | `📝documentation/product` |
</details>
**_<details closed><summary>Priority Type Labels</summary>_**
| Priority Type | Label |
|---------------------|-------------------|
| Low Priority | `low priority` |
| Medium Priority | `medium priority` |
| High Priority | `high priority` |
</details>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct. | 1.0 | ๐งUpdate actions/core package - ### Complete The Item Below
- [X] I have updated the title without removing the 🚧 emoji.
### Description
Update the **_actions/core_** package to the latest version.
This update is required due to changes that GitHub has made to how action outputs are dealt with. This is a security update.
Click [here](https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/) for more information.
**Package to update:**
```diff
"dependencies": {
- "@actions/core": "^1.4.0",
+ "@actions/core": "^1.10.0",
"@actions/github": "^5.0.0",
"@vercel/ncc": "^0.28.6",
"axios": "^0.21.1",
"node": "^16.5.0"
},
```
### Acceptance Criteria
- [ ] **actions/core** package updated to latest version.
### ToDo Items
- [X] Change type labels added to this issue. Refer to the _**Change Type Labels**_ section below.
- [X] Priority label added to this issue. Refer to the _**Priority Type Labels**_ section below.
- [X] Issue linked to the correct project _(if applicable)_.
- [X] Issue linked to the correct milestone _(if applicable)_.
- [ ] Draft pull request created and linked to this issue _(only required with code changes)_.
### Issue Dependencies
_No response_
### Related Work
_No response_
### Additional Information:
**_<details closed><summary>Change Type Labels</summary>_**
| Change Type | Label |
|---------------------|----------------------|
| Bug Fixes | `🐛bug` |
| Breaking Changes | `🧨breaking changes` |
| New Feature | `✨new feature` |
| Workflow Changes | `workflow` |
| Code Doc Changes | `🗒️documentation/code` |
| Product Doc Changes | `📝documentation/product` |
</details>
**_<details closed><summary>Priority Type Labels</summary>_**
| Priority Type | Label |
|---------------------|-------------------|
| Low Priority | `low priority` |
| Medium Priority | `medium priority` |
| High Priority | `high priority` |
</details>
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct. | priority | ๐งupdate actions core package complete the item below i have updated the title without removing the ๐ง emoji description update the actions core package to the latest version this update is required due to changes that github has made to how action outputs are dealt with this is a security update click for more information package to update diff dependencies actions core actions core actions github vercel ncc axios node acceptance criteria actions core package updated to latest version todo items change type labels added to this issue refer to the change type labels section below priority label added to this issue refer to the priority type labels section below issue linked to the correct project if applicable issue linked to the correct milestone if applicable draft pull request created and linked to this issue only required with code changes issue dependencies no response related work no response additional information change type labels change type label bug fixes ๐bug breaking changes ๐งจbreaking changes new feature โจnew feature workflow changes workflow code doc changes ๐๏ธdocumentation code product doc changes ๐documentation product priority type labels priority type label low priority low priority medium priority medium priority high priority high priority code of conduct i agree to follow this project s code of conduct | 1 |
651,965 | 21,516,546,368 | IssuesEvent | 2022-04-28 10:24:26 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Save button disappear on map Import | bug Priority: High Accepted Internal Regression | ## Description
<!-- Add here a few sentences describing the bug. -->
The Save button disappears on map Import.
## How to reproduce
<!-- A list of steps to reproduce the bug -->

*Expected Result*
<!-- Describe here the expected result -->
If the user is authenticated with edit permission on the current map, then the Save button should be maintained in the Option menu.
*Current Result*
<!-- Describe here the current behavior -->
The Save button disappears in the Option menu.
- [x] Not browser related
<details><summary> <b>Browser info</b> </summary>
<!-- If browser related, please compile the following table -->
<!-- If your browser is not in the list please add a new row to the table with the version -->
(use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users)
| Browser Affected | Version |
|---|---|
|Internet Explorer| |
|Edge| |
|Chrome| |
|Firefox| |
|Safari| |
</details>
## Other useful information
<!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
This bug affects DEV, QA and STABLE | 1.0 | Save button disappear on map Import - ## Description
<!-- Add here a few sentences describing the bug. -->
The Save button disappears on map Import.
## How to reproduce
<!-- A list of steps to reproduce the bug -->

*Expected Result*
<!-- Describe here the expected result -->
If the user is authenticated with edit permission on the current map, then the Save button should be maintained in the Option menu.
*Current Result*
<!-- Describe here the current behavior -->
The Save button disappears in the Option menu.
- [x] Not browser related
<details><summary> <b>Browser info</b> </summary>
<!-- If browser related, please compile the following table -->
<!-- If your browser is not in the list please add a new row to the table with the version -->
(use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users)
| Browser Affected | Version |
|---|---|
|Internet Explorer| |
|Edge| |
|Chrome| |
|Firefox| |
|Safari| |
</details>
## Other useful information
<!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
This bug affects DEV, QA and STABLE | priority | save button disappear on map import description save button disappear on map import how to reproduce expected result if the user is authenticated with edit permission on the current map then the save button should be maintained in the option menu current result the save button disappear in the option menu not browser related browser info use this site a href for non expert users browser affected version internet explorer edge chrome firefox safari other useful information this but affects dev qa and stable | 1 |
667,777 | 22,499,899,419 | IssuesEvent | 2022-06-23 10:50:26 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Getting 400 API response with UI error when select Magic Link Login from SPA sign in methods | Priority/Highest Severity/Critical bug console Affected-6.0.0 QA-Reported | **How to reproduce:**
1. Create a SPA
2. Go to Sign in methods tab of the created SPA
3. Select Magic Link Login
4. Once you are navigated to the config page click Update button
Getting 400 API response and below UI error


We need to remove the magic link login option if this is not supported in IS 6.0.0 M5
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
H2 default
IS 6.0.0 M5 | 1.0 | Getting 400 API response with UI error when select Magic Link Login from SPA sign in methods - **How to reproduce:**
1. Create a SPA
2. Go to Sign in methods tab of the created SPA
3. Select Magic Link Login
4. Once you are navigated to the config page click Update button
Getting 400 API response and below UI error


We need to remove the magic link login option if this is not supported in IS 6.0.0 M5
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
H2 default
IS 6.0.0 M5 | priority | getting api response with ui error when select magic link login from spa sign in methods how to reproduce create a spa go to sign in methods tab of the created spa select magic link login once you are navigated to the config page click update button getting api response and below ui error we need to get remove the magic link login option if this is not supported in is environment information please complete the following information remove any unnecessary fields default is | 1 |
50,492 | 3,006,435,676 | IssuesEvent | 2015-07-27 10:22:11 | Itseez/opencv | https://api.github.com/repos/Itseez/opencv | opened | OpenCV Python unable to access correct OpenNI device channels | affected: 2.4 auto-transferred bug category: highgui-camera priority: normal | Transferred from http://code.opencv.org/issues/3663
```
|| Andrew Braun on 2014-04-24 03:48
|| Priority: Normal
|| Affected: 2.4.9 (latest release)
|| Category: highgui-camera
|| Tracker: Bug
|| Difficulty: Medium
|| PR:
|| Platform: x64 / Linux
```
OpenCV Python unable to access correct OpenNI device channels
-----------
```
I put together some simple code in python to grab different channels from OpenNI devices. I built OpenCV myself with all the PrimeSense and OpenNI support enabled. The OpenNI samples work perfectly for both the Kinect sensor and PrimeSense sensor, as well as the OpenCV samples for testing OpenNI support (./cpp-example-openni_capture).
Here is the code I put together.
@
import cv2
import cv2.cv as cv
capture = cv2.VideoCapture(cv.CV_CAP_OPENNI)
capture.set(cv.CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE, cv.CV_CAP_OPENNI_VGA_30HZ)
print capture.get(cv.CV_CAP_PROP_OPENNI_REGISTRATION)
while True:
if not capture.grab():
print "Unable to Grab Frames from camera"
break
okay1, depth_map = capture.retrieve(cv.CV_CAP_OPENNI_DEPTH_MAP)
if not okay1:
print "Unable to Retrieve Disparity Map from camera"
break
okay2, gray_image = capture.retrieve(cv.CV_CAP_OPENNI_GRAY_IMAGE)
if not okay2:
print "Unable to retrieve Gray Image from device"
break
cv2.imshow("depth camera", depth_map)
cv2.imshow("rgb camera", gray_image)
if cv2.waitKey(10) == 27:
break
cv2.destroyAllWindows()
capture.release()
@
So everything runs fine, but the results that are being displayed are not the correct channels... For example if I wanted to access the gray image channel and the depth map channel, both images being displayed are depth_maps.
Yes I've tried accessing other channels and changing the OPENNI_IMAGE_GENERATOR_MODE. Unfortunately the results have stayed consistent. No matter what I try I always get the same depth channel back. depth_map-gray_image yields an all black image.
Like I said the C++ OpenCV OpenNI examples all work perfectly for both the Kinect sensor and primesense sensor. It seems like a problem with the Python modules, or I am doing something really stupid.
Running on Ubuntu 12.04 LTS OpenCV version 2.4.9
Thanks for helping.
Drew
```
History
-------
##### Alexander Smorkalov on 2014-04-30 19:05
```
- Target version changed from 2.4.9 to 2.4.10
```
##### Dmitry Retinskiy on 2014-09-17 10:12
```
Alexander, could you check this?
Thanks.
- Affected version changed from 2.4.8 (latest release) to 2.4.9
(latest release)
- Assignee set to Alexander Smorkalov
- Status changed from New to Open
``` | 1.0 | OpenCV Python unable to access correct OpenNI device channels - Transferred from http://code.opencv.org/issues/3663
```
|| Andrew Braun on 2014-04-24 03:48
|| Priority: Normal
|| Affected: 2.4.9 (latest release)
|| Category: highgui-camera
|| Tracker: Bug
|| Difficulty: Medium
|| PR:
|| Platform: x64 / Linux
```
OpenCV Python unable to access correct OpenNI device channels
-----------
```
I put together some simple code in python to grab different channels from OpenNI devices. I built OpenCV myself with all the PrimeSense and OpenNI support enabled. The OpenNI samples work perfectly for both the Kinect sensor and PrimeSense sensor, as well as the OpenCV samples for testing OpenNI support (./cpp-example-openni_capture).
Here is the code I put together.
@
import cv2
import cv2.cv as cv
capture = cv2.VideoCapture(cv.CV_CAP_OPENNI)
capture.set(cv.CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE, cv.CV_CAP_OPENNI_VGA_30HZ)
print capture.get(cv.CV_CAP_PROP_OPENNI_REGISTRATION)
while True:
if not capture.grab():
print "Unable to Grab Frames from camera"
break
okay1, depth_map = capture.retrieve(cv.CV_CAP_OPENNI_DEPTH_MAP)
if not okay1:
print "Unable to Retrieve Disparity Map from camera"
break
okay2, gray_image = capture.retrieve(cv.CV_CAP_OPENNI_GRAY_IMAGE)
if not okay2:
print "Unable to retrieve Gray Image from device"
break
cv2.imshow("depth camera", depth_map)
cv2.imshow("rgb camera", gray_image)
if cv2.waitKey(10) == 27:
break
cv2.destroyAllWindows()
capture.release()
@
So everything runs fine, but the results that are being displayed are not the correct channels... For example if I wanted to access the gray image channel and the depth map channel, both images being displayed are depth_maps.
Yes I've tried accessing other channels and changing the OPENNI_IMAGE_GENERATOR_MODE. Unfortunately the results have stayed consistent. No matter what I try I always get the same depth channel back. depth_map-gray_image yields an all black image.
Like I said the C++ OpenCV OpenNI examples all work perfectly for both the Kinect sensor and primesense sensor. It seems like a problem with the Python modules, or I am doing something really stupid.
Running on Ubuntu 12.04 LTS OpenCV version 2.4.9
Thanks for helping.
Drew
```
History
-------
##### Alexander Smorkalov on 2014-04-30 19:05
```
- Target version changed from 2.4.9 to 2.4.10
```
##### Dmitry Retinskiy on 2014-09-17 10:12
```
Alexander, could you check this?
Thanks.
- Affected version changed from 2.4.8 (latest release) to 2.4.9
(latest release)
- Assignee set to Alexander Smorkalov
- Status changed from New to Open
``` | priority | opencv python unable to access correct openni device channels transferred from andrew braun on priority normal affected latest release category highgui camera tracker bug difficulty medium pr platform linux opencv python unable to access correct openni device channels i put together some simple code in python to grab different channels from openni devices i built opencv myself with all the primesense and openni support enabled the openni samples work perfectly for both the kinect sensor and primesense sensor as well as the opencv samples for testing openni support cpp example openni capture here is the code i put together import import cv as cv capture videocapture cv cv cap openni capture set cv cv cap openni image generator output mode cv cv cap openni vga print capture get cv cv cap prop openni registration while true if not capture grab print unable to grab frames from camera break depth map capture retrieve cv cv cap openni depth map if not print unable to retrieve disparity map from camera break gray image capture retrieve cv cv cap openni gray image if not print unable to retrieve gray image from device break imshow depth camera depth map imshow rgb camera gray image if waitkey break destroyallwindows capture release so everything runs fine but the results that are being displayed are not the correct channels for example if i wanted to access the gray image channel and the depth map channel both images being displayed are depth maps yes i ve tried accessing other channels and changing the openni image generator mode unfortunately the results have stayed consistent no matter what i try i always get the same depth channel back depth map gray image yields an all black image like i said the c opencv openni examples all work perfectly for both the kinect sensor and primesense sensor it seems like a problem with the python modules or i am doing something really stupid running on ubuntu lts opencv version thanks for helping drew history alexander 
smorkalov on target version changed from to dmitry retinskiy on alexander could you check this thanks affected version changed from latest release to latest release assignee set to alexander smorkalov status changed from new to open | 1 |
703,813 | 24,173,874,304 | IssuesEvent | 2022-09-22 22:10:44 | dtcenter/MET | https://api.github.com/repos/dtcenter/MET | opened | Fix Stat-Analysis aggregation of the NBRCTC line type. | type: bug alert: NEED ACCOUNT KEY requestor: METplus Team MET: Aggregation Tools priority: high | ## Describe the Problem ##
This issue was raised via GitHub discussion dtcenter/METplus#1826. The aggregation of the NBRCTC line types by Stat-Analysis does not work as expected.
Please provide this bugfix for the main_v10.0, main_v10.1, and develop branches.
### Expected Behavior ###
Stat-Analysis should be able to aggregate multiple NBRCTC lines and write an output .stat file by running `-job aggregate -line_type NBRCTC -out_stat out.stat`.
### Environment ###
Describe your runtime environment:
*1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop)*
*2. OS: (e.g. RedHat Linux, MacOS)*
*3. Software version number(s)*
### To Reproduce ###
Describe the steps to reproduce the behavior:
1. Run MET version 10.0 or later:
```
stat_analysis -lookin grid_stat_APCP_12_120000L_20050807_120000V.txt \
-job aggregate -line_type NBRCTC -out_stat out_stat.txt
```
Using: [grid_stat_APCP_12_120000L_20050807_120000V.txt](https://github.com/dtcenter/MET/files/9629676/grid_stat_APCP_12_120000L_20050807_120000V.txt)
2. Observe the runtime error:
```
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_M_construct null not valid
Abort trap: 6
```
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
*Define the source of funding and account keys here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [x] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required: None needed
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Organization** level **Project** for support of the current coordinated release
- [x] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next bugfix version
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
No impacts.
## Bugfix Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **main_\<Version>**.
Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>`
- [ ] Fix the bug and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **main_\<Version>**.
Pull request: `bugfix <Issue Number> main_<Version> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Organization** level software support **Project** for the current coordinated release
Select: **Milestone** as the next bugfix version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Complete the steps above to fix the bug on the **develop** branch.
Branch name: `bugfix_<Issue Number>_develop_<Description>`
Pull request: `bugfix <Issue Number> develop <Description>`
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Close this issue.
| 1.0 | Fix Stat-Analysis aggregation of the NBRCTC line type. - ## Describe the Problem ##
This issue was raised via GitHub discussion dtcenter/METplus#1826. The aggregation of the NBRCTC line types by Stat-Analysis does not work as expected.
Please provide this bugfix for the main_v10.0, main_v10.1, and develop branches.
### Expected Behavior ###
Stat-Analysis should be able to aggregate multiple NBRCTC lines and write an output .stat file by running `-job aggregate -line_type NBRCTC -out_stat out.stat`.
### Environment ###
Describe your runtime environment:
*1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop)*
*2. OS: (e.g. RedHat Linux, MacOS)*
*3. Software version number(s)*
### To Reproduce ###
Describe the steps to reproduce the behavior:
1. Run MET version 10.0 or later:
```
stat_analysis -lookin grid_stat_APCP_12_120000L_20050807_120000V.txt \
-job aggregate -line_type NBRCTC -out_stat out_stat.txt
```
Using: [grid_stat_APCP_12_120000L_20050807_120000V.txt](https://github.com/dtcenter/MET/files/9629676/grid_stat_APCP_12_120000L_20050807_120000V.txt)
2. Observe the runtime error:
```
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_M_construct null not valid
Abort trap: 6
```
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
*Define the source of funding and account keys here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [x] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required: None needed
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Organization** level **Project** for support of the current coordinated release
- [x] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next bugfix version
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
No impacts.
## Bugfix Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **main_\<Version>**.
Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>`
- [ ] Fix the bug and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **main_\<Version>**.
Pull request: `bugfix <Issue Number> main_<Version> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Organization** level software support **Project** for the current coordinated release
Select: **Milestone** as the next bugfix version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Complete the steps above to fix the bug on the **develop** branch.
Branch name: `bugfix_<Issue Number>_develop_<Description>`
Pull request: `bugfix <Issue Number> develop <Description>`
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Close this issue.
| priority | fix stat analysis aggregation of the nbrctc line type describe the problem this issue was raised via github discussion dtcenter metplus the aggregation of the nbrctc line types by stat analysis does not work as expected please provide this bugfix for the main main and develop branches expected behavior stat analysis should be able to aggregate multiple nbrctc lines and write an output stat file by running job aggregate line type nbrctc out stat out stat environment describe your runtime environment machine e g hpc name linux workstation mac laptop os e g redhat linux macos software version number s to reproduce describe the steps to reproduce the behavior run met version or later stat analysis lookin grid stat apcp txt job aggregate line type nbrctc out stat out stat txt using observe the runtime error terminate called after throwing an instance of std logic error what basic string m construct null not valid abort trap relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required none needed labels select component s select priority select requestor s projects and milestone select organization level project for support of the current coordinated release select repository level project for development toward the next official release or add alert need project assignment label select milestone as the next bugfix version define related issue s consider the impact to the other metplus components no impacts bugfix checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of main branch name bugfix main fix the bug and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a 
pull request to merge into main pull request bugfix main define the pull request metadata as permissions allow select reviewer s and linked issues select organization level software support project for the current coordinated release select milestone as the next bugfix version iterate until the reviewer s accept and merge your changes delete your fork or branch complete the steps above to fix the bug on the develop branch branch name bugfix develop pull request bugfix develop select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version close this issue | 1 |
491,062 | 14,144,172,658 | IssuesEvent | 2020-11-10 16:07:40 | ooni/ooni.org | https://api.github.com/repos/ooni/ooni.org | closed | Measurement Training (Week 5) | community effort/L priority/high workshop | As part of the 5th week of Internews' Measurement Training (which OONI is a lead partner on), I'll be working on the following:
- [x] Coordinate with mentor and share relevant resources
- [x] Create 7 new Google docs (for each training group) for the assignments of Week 5
- [x] Review participant edits to the assignments of Week 3 (based on my feedback)
- [x] Assist participants with questions and support
- [x] Help facilitate the synchronous session of Week 5
- [x] Share relevant resources for Week 5
- [x] Coordinate on participant group presentations (Week 6) | 1.0 | Measurement Training (Week 5) - As part of the 5th week of Internews' Measurement Training (which OONI is a lead partner on), I'll be working on the following:
- [x] Coordinate with mentor and share relevant resources
- [x] Create 7 new Google docs (for each training group) for the assignments of Week 5
- [x] Review participant edits to the assignments of Week 3 (based on my feedback)
- [x] Assist participants with questions and support
- [x] Help facilitate the synchronous session of Week 5
- [x] Share relevant resources for Week 5
- [x] Coordinate on participant group presentations (Week 6) | priority | measurement training week as part of the week of internews measurement training which ooni is a lead partner on i ll be working on the following coordinate with mentor and share relevant resources create new google docs for each training group for the assignments of week review participant edits to the assignments of week based on my feedback assist participants with questions and support help facilitate the synchronous session of week share relevant resources for week coordinate on participant group presentations week | 1 |
265,485 | 8,354,833,523 | IssuesEvent | 2018-10-02 14:17:45 | opentargets/genetics | https://api.github.com/repos/opentargets/genetics | closed | Add `variantInfo(variantId)` query to API | Kind: Enhancement Priority: High Status: In progress | It would be very useful to have a query that has the following format. This allows us to show rsId on the variant page.
```
type query {
variantInfo(variantId: String!): Variant
}
type Variant {
rsId: String!
}
```
Feel free to add additional fields. | 1.0 | Add `variantInfo(variantId)` query to API - It would be very useful to have a query that has the following format. This allows us to show rsId on the variant page.
```
type query {
variantInfo(variantId: String!): Variant
}
type Variant {
rsId: String!
}
```
Feel free to add additional fields. | priority | add variantinfo variantid query to api it would be very useful to have a query that has the following format this allows us to show rsid on the variant page type query variantinfo variantid string variant type variant rsid string feel free to add additional fields | 1 |
263,611 | 8,299,005,840 | IssuesEvent | 2018-09-21 00:12:08 | phetsims/faradays-law | https://api.github.com/repos/phetsims/faradays-law | closed | Remove "show" from tandem from sim | meeting:phet-io priority:2-high status:ready-for-review | There are two places where this is used and will need to be changed:
- [ ] for the two checkboxes, remove the `show` prefix, as `voltmeterCheckbox` and `fieldLinesCheckbox` is clear enough
- [ ] for model Properties, change the use of `show` to `visible`, i.e. `showFieldLinesProperty` -> `fieldLinesVisibleProperty`.
Assigning to many, for whoever gets to it first. | 1.0 | Remove "show" from tandem from sim - There are two places where this is used and will need to be changed:
- [ ] for the two checkboxes, remove the `show` prefix, as `voltmeterCheckbox` and `fieldLinesCheckbox` is clear enough
- [ ] for model Properties, change the use of `show` to `visible`, i.e. `showFieldLinesProperty` -> `fieldLinesVisibleProperty`.
Assigning to many, for whoever gets to it first. | priority | remove show from tandem from sim there are two places where this is used and will need to be changed for the two checkboxes remove the show prefix as voltmetercheckbox and fieldlinescheckbox is clear enough for model properties change the use of show to visible i e showfieldlinesproperty fieldlinesvisibleproperty assigning to many for whoever gets to it first | 1 |
503,742 | 14,597,305,391 | IssuesEvent | 2020-12-20 19:32:03 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | account.samsung.com - design is broken | browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-important | <!-- @browser: Firefox Mobile 84.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64035 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://account.samsung.com/accounts/v1/GalaxyStoreWeb/signInGate?response_type=code&client_id=fbv6kn9399&state=IVwReJftivwKNEFsf4lx49BNeho2WzCC&redirect_uri=https:%2F%2Fgalaxystore.samsung.com%2Fapi%2Faccount%2Fsigninsuccess
**Browser / Version**: Firefox Mobile 84.0
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
Someone trying to mirror my screen and get password
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/3bda5edf-4066-4d8f-881d-af3b8b2ba1fe.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201206192040</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/87308cc3-ece9-4481-b245-99bd7a325d38)
_From [webcompat.com](https://webcompat.com/) with โค๏ธ_ | 1.0 | account.samsung.com - design is broken - <!-- @browser: Firefox Mobile 84.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64035 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://account.samsung.com/accounts/v1/GalaxyStoreWeb/signInGate?response_type=code&client_id=fbv6kn9399&state=IVwReJftivwKNEFsf4lx49BNeho2WzCC&redirect_uri=https:%2F%2Fgalaxystore.samsung.com%2Fapi%2Faccount%2Fsigninsuccess
**Browser / Version**: Firefox Mobile 84.0
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
Someone trying to mirror my screen and get password
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/3bda5edf-4066-4d8f-881d-af3b8b2ba1fe.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201206192040</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/87308cc3-ece9-4481-b245-99bd7a325d38)
_From [webcompat.com](https://webcompat.com/) with โค๏ธ_ | priority | account samsung com design is broken url browser version firefox mobile operating system android tested another browser yes other problem type design is broken description items not fully visible steps to reproduce someone trying to mirror my screen and get password view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with โค๏ธ | 1 |
600,192 | 18,291,314,911 | IssuesEvent | 2021-10-05 15:31:38 | datalab-dev/covid_worksite_exposure | https://api.github.com/repos/datalab-dev/covid_worksite_exposure | opened | Error Handling for Dates | Feature High Priority | **Problem:** Currently in the code, if we have dates that don't parse, the web map code can't handle them and none of the exposure data can load.
**Proposed Solution:** Adjust the script to remove lines in the table with dates that didn't parse. This should be similar to how we handle unmatched buildings. The lines that don't parse can be put into a CSV file for inspection so we can add new date formats to the list of possible formats. | 1.0 | Error Handling for Dates - **Problem:** Currently in the code, if we have dates that don't parse, the web map code can't handle them and none of the exposure data can load.
**Proposed Solution:** Adjust the script to remove lines in the table with dates that didn't parse. This should be similar to how we handle unmatched buildings. The lines that don't parse can be put into a CSV file for inspection so we can add new date formats to the list of possible formats. | priority | error handling for dates problem currently in the code if we have dates that don t parse the web map code can t handle them and none of the exposure data can load proposed solution adjust the script to remove lines in the table with dates that didn t parse this should be similar to how we handle unmatched buildings the lines that don t parse can be put into a csv file for inspection so we can add new date formats to the list of possible formats | 1 |
481,441 | 13,886,329,927 | IssuesEvent | 2020-10-19 00:11:21 | AY2021S1-CS2103T-T11-1/tp | https://api.github.com/repos/AY2021S1-CS2103T-T11-1/tp | closed | As an organised salesman, I can find contacts who have purchased items of a certain sales tag | priority.High type.Story | ...so that I can gain insights in the type of items my contacts are interested in. | 1.0 | As an organised salesman, I can find contacts who have purchased items of a certain sales tag - ...so that I can gain insights in the type of items my contacts are interested in. | priority | as an organised salesman i can find contacts who have purchased items of a certain sales tag so that i can gain insights in the type of items my contacts are interested in | 1 |
598,817 | 18,256,374,234 | IssuesEvent | 2021-10-03 04:56:22 | AY2122S1-CS2113T-T10-1/tp | https://api.github.com/repos/AY2122S1-CS2113T-T10-1/tp | closed | Delete medication | priority.High type.Story | As a pharmacist, I want to be able to delete a medicine so that in the event of a product recall/end of production, I can remove it from the system. | 1.0 | Delete medication - As a pharmacist, I want to be able to delete a medicine so that in the event of a product recall/end of production, I can remove it from the system. | priority | delete medication as a pharmacist i want to be able to delete a medicine so that in the event of a product recall end of production i can remove it from the system | 1 |
211,519 | 7,201,827,362 | IssuesEvent | 2018-02-06 00:26:22 | jaws/jaws | https://api.github.com/repos/jaws/jaws | closed | Condense time and date conversion code | high priority | Code like the following is silly. Program it mathematically. Stop using string comparisons to perform numerical work. Not very data-sciencey. Always ask on StackOverflow before implementing crufty code like this:
```
columns[2] = float(columns[2])
if str(columns[2]-int(columns[2]))[1:4] in {'.0', '.00', '.02','.99'}:
hour[j] = 0
elif str(columns[2]-int(columns[2]))[1:4] in {'.04', '.05'}:
hour[j] = 1
elif str(columns[2]-int(columns[2]))[1:4] in {'.08', '.07'}:
hour[j] = 2
elif str(columns[2]-int(columns[2]))[1:4] in {'.12', '.10'}:
hour[j] = 3
elif str(columns[2]-int(columns[2]))[1:4] in {'.16', '.15'}:
hour[j] = 4
elif str(columns[2]-int(columns[2]))[1:4] in {'.20'}:
hour[j] = 5
elif str(columns[2]-int(columns[2]))[1:4] in {'.25', '.24'}:
hour[j] = 6
elif str(columns[2]-int(columns[2]))[1:4] in {'.29'}:
hour[j] = 7
elif str(columns[2]-int(columns[2]))[1:4] in {'.33'}:
hour[j] = 8
elif str(columns[2]-int(columns[2]))[1:4] in {'.37'}:
hour[j] = 9
elif str(columns[2]-int(columns[2]))[1:4] in {'.41'}:
hour[j] = 10
elif str(columns[2]-int(columns[2]))[1:4] in {'.45', '.48'}:
hour[j] = 11
elif str(columns[2]-int(columns[2]))[1:4] in {'.5', '.49'}:
hour[j] = 12
elif str(columns[2]-int(columns[2]))[1:4] in {'.54'}:
hour[j] = 13
elif str(columns[2]-int(columns[2]))[1:4] in {'.58'}:
hour[j] = 14
elif str(columns[2]-int(columns[2]))[1:4] in {'.62'}:
hour[j] = 15
elif str(columns[2]-int(columns[2]))[1:4] in {'.66'}:
hour[j] = 16
elif str(columns[2]-int(columns[2]))[1:4] in {'.70', '.71'}:
hour[j] = 17
elif str(columns[2]-int(columns[2]))[1:4] in {'.75', '.74'}:
hour[j] = 18
elif str(columns[2]-int(columns[2]))[1:4] in {'.79'}:
hour[j] = 19
elif str(columns[2]-int(columns[2]))[1:4] in {'.83'}:
hour[j] = 20
elif str(columns[2]-int(columns[2]))[1:4] in {'.87'}:
hour[j] = 21
elif str(columns[2]-int(columns[2]))[1:4] in {'.91'}:
hour[j] = 22
elif str(columns[2]-int(columns[2]))[1:4] in {'.95'}:
hour[j] = 23
``` | 1.0 | Condense time and date conversion code - Code like the following is silly. Program it mathematically. Stop using string comparisons to perform numerical work. Not very data-sciencey. Always ask on StackOverflow before implementing crufty code like this:
```
columns[2] = float(columns[2])
if str(columns[2]-int(columns[2]))[1:4] in {'.0', '.00', '.02','.99'}:
hour[j] = 0
elif str(columns[2]-int(columns[2]))[1:4] in {'.04', '.05'}:
hour[j] = 1
elif str(columns[2]-int(columns[2]))[1:4] in {'.08', '.07'}:
hour[j] = 2
elif str(columns[2]-int(columns[2]))[1:4] in {'.12', '.10'}:
hour[j] = 3
elif str(columns[2]-int(columns[2]))[1:4] in {'.16', '.15'}:
hour[j] = 4
elif str(columns[2]-int(columns[2]))[1:4] in {'.20'}:
hour[j] = 5
elif str(columns[2]-int(columns[2]))[1:4] in {'.25', '.24'}:
hour[j] = 6
elif str(columns[2]-int(columns[2]))[1:4] in {'.29'}:
hour[j] = 7
elif str(columns[2]-int(columns[2]))[1:4] in {'.33'}:
hour[j] = 8
elif str(columns[2]-int(columns[2]))[1:4] in {'.37'}:
hour[j] = 9
elif str(columns[2]-int(columns[2]))[1:4] in {'.41'}:
hour[j] = 10
elif str(columns[2]-int(columns[2]))[1:4] in {'.45', '.48'}:
hour[j] = 11
elif str(columns[2]-int(columns[2]))[1:4] in {'.5', '.49'}:
hour[j] = 12
elif str(columns[2]-int(columns[2]))[1:4] in {'.54'}:
hour[j] = 13
elif str(columns[2]-int(columns[2]))[1:4] in {'.58'}:
hour[j] = 14
elif str(columns[2]-int(columns[2]))[1:4] in {'.62'}:
hour[j] = 15
elif str(columns[2]-int(columns[2]))[1:4] in {'.66'}:
hour[j] = 16
elif str(columns[2]-int(columns[2]))[1:4] in {'.70', '.71'}:
hour[j] = 17
elif str(columns[2]-int(columns[2]))[1:4] in {'.75', '.74'}:
hour[j] = 18
elif str(columns[2]-int(columns[2]))[1:4] in {'.79'}:
hour[j] = 19
elif str(columns[2]-int(columns[2]))[1:4] in {'.83'}:
hour[j] = 20
elif str(columns[2]-int(columns[2]))[1:4] in {'.87'}:
hour[j] = 21
elif str(columns[2]-int(columns[2]))[1:4] in {'.91'}:
hour[j] = 22
elif str(columns[2]-int(columns[2]))[1:4] in {'.95'}:
hour[j] = 23
``` | priority | condense time and date conversion code code like the following is silly program it mathematically stop using string comparisons to perform numerical work not very data sciencey always ask on stackoverflow before implementing crufty code like this columns float columns if str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour elif str columns int columns in hour | 1 |
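The branch ladder in the issue above can be replaced by arithmetic. A minimal sketch, assuming the fractional part of `columns[2]` encodes the hour as hour/24 (which is what the string buckets approximate):

```python
def frac_day_to_hour(value):
    """Convert a fractional-day timestamp (e.g. 123.2916) to an hour 0-23.

    The fractional part is assumed to encode hour/24, so 0.2916 -> 7.
    Rounding plus the final modulo handles values such as .99 that the
    original code mapped back to hour 0.
    """
    frac = float(value) % 1.0          # fractional part of the day
    return int(round(frac * 24)) % 24  # nearest hour, wrapping 24 -> 0

# hour[j] = frac_day_to_hour(columns[2]) replaces the entire elif chain.
```

Because the original buckets come from truncated strings rather than rounding, a few edge cases (e.g. `'.48'`, which the chain maps to 11 while 0.48 × 24 rounds to 12) may differ by one hour under this sketch.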
62,159 | 3,172,993,345 | IssuesEvent | 2015-09-23 11:47:41 | WarEmu/WarBugs | https://api.github.com/repos/WarEmu/WarBugs | closed | Kick to Char Menu after SC | Ability High Priority | since the last patch i have the Problem that almost everytime after an SC ends that i got kicked to the Char selection Menu and have to login again with the char. Most time it happens with the Kotbs, the other chars are not so frequent.
so i tested it yesterday with a guild mate: Auras and Guard on or off dont change anything but with the BW buff on it happens, is the BW buff off i'm after SC where i was before but have mostly the Combat bug and have to relogg | 1.0 | Kick to Char Menu after SC - since the last patch i have the Problem that almost everytime after an SC ends that i got kicked to the Char selection Menu and have to login again with the char. Most time it happens with the Kotbs, the other chars are not so frequent.
so i tested it yesterday with a guild mate: Auras and Guard on or off dont change anything but with the BW buff on it happens, is the BW buff off i'm after SC where i was before but have mostly the Combat bug and have to relogg | priority | kick to char menu after sc since the last patch i have the problem that almost everytime after an sc ends that i got kicked to the char selection menu and have to login again with the char most time it happens with the kotbs the other chars are not so frequent so i tested it yesterday with a guild mate auras and guard on or off dont change anything but with the bw buff on it happens is the bw buff off i m after sc where i was before but have mostly the combat bug and have to relogg | 1
717,921 | 24,696,394,538 | IssuesEvent | 2022-10-19 12:27:11 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | Generating a TextBox for a model property decorated with [DataType(DataType.DateTime)] attribute renders default input | Bug SEV: High S: Wrappers (ASP.NET Core) Priority 5 FP: Unplanned C: TextBox | ### Bug report
Generating a TextBox for a model property with DataAnnotation `[DataType(DataType.DateTime)]` breaks the rendering of the TextBox component and renders default browser date input.
This is a regression introduced with v2022.2.802
### Reproduction of the problem
Using the following model property and TextBox definition breaks the rendering of the TextBox and renders default input
```
[DataType(DataType.DateTime)]
public DateTime OrderDate { get; set; }
```
```
@model OrderViewModel
@Html.Kendo().TextBox().Name("OrderDate").Value(Model.OrderDate.ToShortDateString())
```

### Expected/desired behavior
Functional TextBox component should be rendered.
### Environment
* **Kendo UI version:** 2022.3.913
* **Browser:** [all]
| 1.0 | Generating a TextBox for a model property decorated with [DataType(DataType.DateTime)] attribute renders default input - ### Bug report
Generating a TextBox for a model property with DataAnnotation `[DataType(DataType.DateTime)]` breaks the rendering of the TextBox component and renders default browser date input.
This is a regression introduced with v2022.2.802
### Reproduction of the problem
Using the following model property and TextBox definition breaks the rendering of the TextBox and renders default input
```
[DataType(DataType.DateTime)]
public DateTime OrderDate { get; set; }
```
```
@model OrderViewModel
@Html.Kendo().TextBox().Name("OrderDate").Value(Model.OrderDate.ToShortDateString())
```

### Expected/desired behavior
Functional TextBox component should be rendered.
### Environment
* **Kendo UI version:** 2022.3.913
* **Browser:** [all]
| priority | generating a textbox for a model property decorated with attribute renders default input bug report generating a textbox for a model property with dataannotation breaks the rendering of the textbox component and renders default browser date input this is a regression introduced with reproduction of the problem using the following model property and textbox definition breaks the rendering of the textbox and renders default input public datetime orderdate get set model orderviewmodel html kendo textbox name orderdate value model orderdate toshortdatestring expected desired behavior functional textbox component should be rendered environment kendo ui version browser | 1 |
321,161 | 9,794,185,555 | IssuesEvent | 2019-06-10 22:00:17 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | CircleCI panel shows 48,600+ lines of expanded YAML configuration above build output | high priority module: ci releng triaged | For example, see this page: https://circleci.com/gh/pytorch/pytorch/812808?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
The browser becomes sluggish when interacting with that page.
Moving some of the inline shell scripts to their own files (#17028) could help. | 1.0 | CircleCI panel shows 48,600+ lines of expanded YAML configuration above build output - For example, see this page: https://circleci.com/gh/pytorch/pytorch/812808?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
The browser becomes sluggish when interacting with that page.
Moving some of the inline shell scripts to their own files (#17028) could help. | priority | circleci panel shows lines of expanded yaml configuration above build output for example see this page the browser becomes sluggish when interacting with that page moving some of the inline shell scripts to their own files could help | 1 |
340,740 | 10,277,833,709 | IssuesEvent | 2019-08-25 08:55:14 | fontforge/fontforge | https://api.github.com/repos/fontforge/fontforge | closed | Add a crash reporter (mac first) so anytime a crash happens, a backtrace is posted to gist | High Priority feature | FontForge crashes a lot. It would be great to deploy a crash reporter utility to collect crash reports for users who want to help debug FF.
| 1.0 | Add a crash reporter (mac first) so anytime a crash happens, a backtrace is posted to gist - FontForge crashes a lot. It would be great to deploy a crash reporter utility to collect crash reports for users who want to help debug FF.
| priority | add a crash reporter mac first so anytime a crash happens a backtrace is posted to gist fontforge crashes a lot it would be great to deploy a crash reporter utility to collect crash reports for users who want to help debug ff | 1 |
536,372 | 15,708,031,296 | IssuesEvent | 2021-03-26 19:50:51 | jsmentch/feature_visualizer | https://api.github.com/repos/jsmentch/feature_visualizer | closed | focus visualizer on stimuli, not runs. add timing | High Priority | get the timing and stimulus file name from the new api route:
https://neuroscout.org/api/swagger/#/run/get_api_runs__run_id__timing
- for each dataset (or task?): are they uniform?
- if yes, collapse run IDs, pick first sub automatically:
- 'uniquefy' the run list | 1.0 | focus visualizer on stimuli, not runs. add timing - get the timing and stimulus file name from the new api route:
https://neuroscout.org/api/swagger/#/run/get_api_runs__run_id__timing
- for each dataset (or task?): are they uniform?
- if yes, collapse run IDs, pick first sub automatically:
- 'uniquefy' the run list | priority | focus visualizer on stimuli not runs add timing get the timing and stimulus file name from the new api route for each dataset or task are they uniform if yes collapse run ids pick first sub automatically uniquefy the run list | 1 |
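The "uniquefy" step sketched in the list above could look like the following; the field names `task_name` and `number` are assumptions about the run objects returned by the API, not confirmed by the issue:

```python
def uniquefy_runs(runs, key_fields=("task_name", "number")):
    """Collapse runs that share the same key fields, keeping the first
    occurrence (e.g. the first subject) and preserving order."""
    seen = set()
    unique = []
    for run in runs:
        key = tuple(run.get(field) for field in key_fields)
        if key not in seen:
            seen.add(key)
            unique.append(run)
    return unique
```

If the runs are in fact uniform per dataset/task, the deduplicated list can drive the visualizer in place of the raw per-subject run list.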
460,904 | 13,220,390,479 | IssuesEvent | 2020-08-17 12:19:39 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | Jurisdiction metadata fails when response from API is empty | Priority: High | `Cannot read property 'settingIdentifier' of undefined` happens when you attempt to download metadata that has not been set:

| 1.0 | Jurisdiction metadata fails when response from API is empty - `Cannot read property 'settingIdentifier' of undefined` happens when you attempt to download metadata that has not been set:

| priority | jurisdiction metadata fails when response from api is empty cannot read property settingidentifier of undefined happens when you attempt to download metadata that has not been set | 1 |
545,613 | 15,953,978,232 | IssuesEvent | 2021-04-15 13:05:03 | geneontology/go-annotation | https://api.github.com/repos/geneontology/go-annotation | closed | PTN000291682 (PTHR12653) & PTN000311525 (PTHR12966) | FlyBase MGI PAINT annotation high priority | PTN000291682 adds "NADH dehydrogenase (ubiquinone) activity" to members of the "NADH-UBIQUINONE OXIDOREDUCTASE 13 KD-B SUBUNIT (PTHR12653)" family based on single annotation to the mouse protein (MOUSE|MGI=MGI=1915452|UniProtKB=Q9CPP6).
PTN000311525 adds "NADH dehydrogenase (ubiquinone) activity" to members of the "NADH DEHYDROGENASE UBIQUINONE 1 ALPHA SUBCOMPLEX SUBUNIT 13 (PTHR12966)" family based on annotations to the human and cow proteins (HUMAN|HGNC=17194|UniProtKB=Q9P0J0) (BOVIN|Ensembl=ENSBTAG00000007812|UniProtKB=Q95KV7).
Both these proteins are characterized as non-catalytic, accessory proteins within Complex I, so these MF annotations should be removed.
(e.g. PMID: 23527692, PMID: 31485716; also see UniProt entries.) | 1.0 | PTN000291682 (PTHR12653) & PTN000311525 (PTHR12966) - PTN000291682 adds "NADH dehydrogenase (ubiquinone) activity" to members of the "NADH-UBIQUINONE OXIDOREDUCTASE 13 KD-B SUBUNIT (PTHR12653)" family based on single annotation to the mouse protein (MOUSE|MGI=MGI=1915452|UniProtKB=Q9CPP6).
PTN000311525 adds "NADH dehydrogenase (ubiquinone) activity" to members of the "NADH DEHYDROGENASE UBIQUINONE 1 ALPHA SUBCOMPLEX SUBUNIT 13 (PTHR12966)" family based on annotations to the human and cow proteins (HUMAN|HGNC=17194|UniProtKB=Q9P0J0) (BOVIN|Ensembl=ENSBTAG00000007812|UniProtKB=Q95KV7).
Both these proteins are characterized as non-catalytic, accessory proteins within Complex I, so these MF annotations should be removed.
(e.g. PMID: 23527692, PMID: 31485716; also see UniProt entries.) | priority | adds nadh dehydrogenase ubiquinone activity to members of the nadh ubiquinone oxidoreductase kd b subunit family based on single annotation to the mouse protein mouse mgi mgi uniprotkb adds nadh dehydrogenase ubiquinone activity to members of the nadh dehydrogenase ubiquinone alpha subcomplex subunit family based on annotations to the human and cow proteins human hgnc uniprotkb bovin ensembl uniprotkb both these proteins are characterized as non catalytic accessory proteins within complex i so these mf annotations should be removed e g pmid pmid also see uniprot entries | 1
332,823 | 10,111,392,874 | IssuesEvent | 2019-07-30 12:39:58 | Templarian/MaterialDesign | https://api.github.com/repos/Templarian/MaterialDesign | closed | Add andrewnenakhov as a contributor | High Priority :grey_exclamation: Reassignment :repeat: | Add @andrewnenakhov as a contributor and transfer icons over.
https://materialdesignicons.com/contributor/Andrew-Nenakhov
If you have a twitter account I can link it above. Eventually you'll be able to edit the page, but not yet. | 1.0 | Add andrewnenakhov as a contributor - Add @andrewnenakhov as a contributor and transfer icons over.
https://materialdesignicons.com/contributor/Andrew-Nenakhov
If you have a twitter account I can link it above. Eventually you'll be able to edit the page, but not yet. | priority | add andrewnenakhov as a contributor add andrewnenakhov as a contributor and transfer icons over if you have a twitter account i can link it above eventually you ll be able to edit the page but not yet | 1 |
324,459 | 9,904,171,861 | IssuesEvent | 2019-06-27 08:36:26 | Sp2000/colplus-frontend | https://api.github.com/repos/Sp2000/colplus-frontend | closed | Visualize assembly duplicates | high priority | Finding, monitoring and acting on duplicate names in the CoL and its sources is probably the most important work when assembling the CoL and this needs the best tools to support.
Consider and document here how to best do that in the frontend.
We can broadly classify duplicates into:
1. [Duplicate names within the dataset](https://github.com/Sp2000/colplus-backend/issues/195)
2. [Duplicate names between the dataset and CoL](https://github.com/Sp2000/colplus-backend/issues/194)
For duplicates within a dataset we can use the search and limit results by datasetKey.
For duplicates between a dataset and the CoL we should be able to use the search and limit results by records in the draft catalogue only, i.e. datasetKey=3. This only catches records that are part of a mapped sector already and therefore have been copied to the draft. This should be the desired behavior, as we don't want to see all duplicates of WoRMS for example, just the part that is actually mapped to the CoL.
For each of those we typically want to filter by:
- higher taxon: restrict results to group(s) within the tree. This is useful when evaluating a single or few sectors
- rank: restrict results to user selected ranks
- status: restrict results to user selected statuses
- identical name with authorship, identical name without authorship, name variant (e.g. epithet gender): For this we should flag different issues (`duplicate name`, `???`, `potential variant`) to filter by
In general a [search result table](http://test.col.plus/dataset/1008/names?issue=duplicate%20name&limit=50&offset=100&reverse=false&status=accepted&status=provisionally%20accepted) ordered by name (to show duplicates next to each other), with a classification (see #60), the above filter options and the ability to batch apply a decision to multiple records should be covering the basic needs. When applying some of the above filters the corresponding duplicate record(s) might have been excluded (especially higher group & status). We need to think how to best include them in the shown table nevertheless.
Apart from a link from the issues tab *Duplicates* should probably be a tab in its own rights | 1.0 | Visualize assembly duplicates - Finding, monitoring and acting on duplicate names in the CoL and its sources is probably the most important work when assembling the CoL and this needs the best tools to support.
Consider and document here how to best do that in the frontend.
We can broadly classify duplicates into:
1. [Duplicate names within the dataset](https://github.com/Sp2000/colplus-backend/issues/195)
2. [Duplicate names between the dataset and CoL](https://github.com/Sp2000/colplus-backend/issues/194)
For duplicates within a dataset we can use the search and limit results by datasetKey.
For duplicates between a dataset and the CoL we should be able to use the search and limit results by records in the draft catalogue only, i.e. datasetKey=3. This only catches records that are part of a mapped sector already and therefore have been copied to the draft. This should be the desired behavior, as we don't want to see all duplicates of WoRMS for example, just the part that is actually mapped to the CoL.
For each of those we typically want to filter by:
- higher taxon: restrict results to group(s) within the tree. This is useful when evaluating a single or few sectors
- rank: restrict results to user selected ranks
- status: restrict results to user selected statuses
- identical name with authorship, identical name without authorship, name variant (e.g. epithet gender): For this we should flag different issues (`duplicate name`, `???`, `potential variant`) to filter by
In general a [search result table](http://test.col.plus/dataset/1008/names?issue=duplicate%20name&limit=50&offset=100&reverse=false&status=accepted&status=provisionally%20accepted) ordered by name (to show duplicates next to each other), with a classification (see #60), the above filter options and the ability to batch apply a decision to multiple records should be covering the basic needs. When applying some of the above filters the corresponding duplicate record(s) might have been excluded (especially higher group & status). We need to think how to best include them in the shown table nevertheless.
Apart from a link from the issues tab *Duplicates* should probably be a tab in its own rights | priority | visualize assembly duplicates finding monitoring and acting on duplicate names in the col and its sources is probably the most important work when assembling the col and this needs the best tools to support consider and document here how to best do that in the frontend we can broadly classify duplicates into for duplicates within a dataset we can use the search and limit results by datasetkey for duplicates between a dataset and the col we should be able to use the search and limit results by records in the draft catalogue only i e datasetkey this only catches records that are part of a mapped sector already and therefore have been copied to the draft this should be the desired behavior as we don t want to see all duplicates of worms for example just the part that is actually mapped to the col for each of those we typically want to filter by higher taxon restrict results to group s within the tree this is useful when evaluating a single or few sectors rank restrict results to user selected ranks status restrict results to user selected statuses identical name with authorship identical name without authorship name variant e g epithet gender for this we should flag different issues duplicate name potential variant to filter by in general a ordered by name to show duplicates next to each other with a classification see the above filter options and the ability to batch apply a decision to multiple records should be covering the basic needs when applying some of the above filters the corresponding duplicate record s might have been excluded especially higher group status we need to think how to best include them in the shown table nevertheless apart from a link from the issues tab duplicates should probably be a tab in its own rights | 1 |
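Surfacing "identical name without authorship" duplicates in a result list amounts to bucketing names by a canonical key. The two-token key below (genus + epithet, lowercased) is a deliberately crude assumption for illustration, not the backend's actual normalization:

```python
from collections import defaultdict

def candidate_duplicates(names):
    """Group (id, scientific name) pairs whose first two name tokens match,
    ignoring case and authorship; return only groups with > 1 member."""
    groups = defaultdict(list)
    for name_id, full_name in names:
        key = " ".join(full_name.split()[:2]).lower()
        groups[key].append(name_id)
    return {key: ids for key, ids in groups.items() if len(ids) > 1}
```

Ordering the search result table by the same key would show duplicate records next to each other, as the issue suggests.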
261,849 | 8,246,862,005 | IssuesEvent | 2018-09-11 14:06:06 | SiLeBAT/FSK-Lab | https://api.github.com/repos/SiLeBAT/FSK-Lab | closed | FSK-Lab does have problems with the knime protocol knime:// | FSK-Lab bug high priority | After a long time fighting with the REST API and endless discussions with KNIME I started from the scratch and figured out, that I am dependend on the the KNIME protocol which does not fully work on the server if FSK-Lab nodes are used to write files.
I run two workflows I created on the server and the server KNIME. Both finish as expected in the server KNIME, but only the one which uses KNIME nodes only can finish as expected in the Server.
https://vm-maslxknime02.bfr.bund.de/knime/#/testing/Lars_testing/testwriteanddownload2 does create a table file in the folder FSK-Web/temp/table.test which can be seen via the connected KNIME explorer if present (knime://knime.mountpoint/FSK-Web/temp/test.table).
https://vm-maslxknime02.bfr.bund.de/knime/#/testing/Lars_testing/testwriteanddownload does create a fskx file in the folder FSK-Web/temp/test.fskx which can be seen via the connected KNIME explorer if present (knime://knime.mountpoint/FSK-Web/temp/test.fskx).
It would be great if you could fix that for me, in order to finally use the full capability of the server. And please check if that also applies for the reader node.
Thanks a lot
Lars
PS. Same issue on the external server - seems not to be related with the plugin version used https://knime.bfr.berlin/knime/#/testing/Lars/testknimeprotocollfsk
(knime://knime.mountpoint/temp/test.fskx)
Error message is the good old: Execute failed: ("NullPointerException"): null | 1.0 | FSK-Lab does have problems with the knime protocol knime:// - After a long time fighting with the REST API and endless discussions with KNIME I started from the scratch and figured out, that I am dependend on the the KNIME protocol which does not fully work on the server if FSK-Lab nodes are used to write files.
I run two workflows I created on the server and the server KNIME. Both finish as expected in the server KNIME, but only the one which uses KNIME nodes only can finish as expected in the Server.
https://vm-maslxknime02.bfr.bund.de/knime/#/testing/Lars_testing/testwriteanddownload2 does create a table file in the folder FSK-Web/temp/table.test which can be seen via the connected KNIME explorer if present (knime://knime.mountpoint/FSK-Web/temp/test.table).
https://vm-maslxknime02.bfr.bund.de/knime/#/testing/Lars_testing/testwriteanddownload does create a fskx file in the folder FSK-Web/temp/test.fskx which can be seen via the connected KNIME explorer if present (knime://knime.mountpoint/FSK-Web/temp/test.fskx).
It would be great if you could fix that for me, in order to finally use the full capability of the server. And please check if that also applies for the reader node.
Thanks a lot
Lars
PS. Same issue on the external server - seems not to be related with the plugin version used https://knime.bfr.berlin/knime/#/testing/Lars/testknimeprotocollfsk
(knime://knime.mountpoint/temp/test.fskx)
Error message is the good old: Execute failed: ("NullPointerException"): null | priority | fsk lab does have problems with the knime protocol knime after a long time fighting with the rest api and endless discussions with knime i started from the scratch and figured out that i am dependend on the the knime protocol which does not fully work on the server if fsk lab nodes are used to write files i run two workflows i created on the server and the server knime both finish as expected in the server knime but only the one which uses knime nodes only can finish as expected in the server does create a table file in the folder fsk web temp table test which can be seen via the connected knime explorer if present knime knime mountpoint fsk web temp test table does create a fskx file in the folder fsk web temp test fskx which can be seen via the connected knime explorer if present knime knime mountpoint fsk web temp test fskx it would be great if you could fix that for me in order to finally use the full capability of the server and please check if that also applies for the reader node thanks a lot lars ps same issue on the external server seems not to be related with the plugin version used knime knime mountpoint temp test fskx error message is the good old execute failed nullpointerexception null | 1 |
647,235 | 21,095,945,167 | IssuesEvent | 2022-04-04 10:18:12 | bitsongofficial/sinfonia-ui | https://api.github.com/repos/bitsongofficial/sinfonia-ui | closed | Missing airdrop link | High Priority Review | In the Fan token PDP, we are missing an external link (we also need the external link icon) to the Airdrop page.
Implementation:

| 1.0 | Missing airdrop link - In the Fan token PDP, we are missing an external link (we also need the external link icon) to the Airdrop page.
Implementation:

| priority | missing airdrop link in the fan token pdp we are missing an external link we also need the external link icon to the airdrop page implementation | 1 |
590,542 | 17,780,384,953 | IssuesEvent | 2021-08-31 03:05:14 | TencentBlueKing/bk-iam-saas | https://api.github.com/repos/TencentBlueKing/bk-iam-saas | closed | [Product] Review of the BF integration plan with the Permission Center | Type: Question Priority: High Layer: Product Size: M | This involves:
1. When an operation is associated with two or more resource types, the current Cartesian-product approach does not meet the requirement
2. For temporary permissions, the validity periods of different instances under the same operation are inconsistent
------
The product layer needs a corresponding solution | 1.0 | [Product] Review of the BF integration plan with the Permission Center - This involves:
1. When an operation is associated with two or more resource types, the current Cartesian-product approach does not meet the requirement
2. For temporary permissions, the validity periods of different instances under the same operation are inconsistent
------
The product layer needs a corresponding solution | priority | bf integration plan with the permission center review this involves when an operation is associated with two or more resource types the current cartesian product approach does not meet the requirement for temporary permissions the validity periods of different instances under the same operation are inconsistent the product layer needs a corresponding solution | 1
131,367 | 5,147,092,244 | IssuesEvent | 2017-01-13 05:06:52 | rainlab/builder-plugin | https://api.github.com/repos/rainlab/builder-plugin | closed | creating additional fields.yaml ex: newname_fields.yaml will not save | Priority: High Status: Review Needed Type: Unconfirmed Bug | ##### Expected behavior
When user clicks on "save" while working with a fields.yaml that has a different name (example: newname_fields.yaml) the file should be saved.
##### Actual behavior
When user clicks on "save" while working with a fields.yaml that has a different name (example: newname_fields.yaml) the file does not save, and the Builder plugin shows that there are unsaved changes
##### Reproduce steps
- Open up a model
- Click on the + symbol adjacent to the Form tree branch
- Type in a new fields.yaml name such as "newname_fields.yaml"
- Save document by clicking the save option
##### October build
365
##### Builder Version
1.0.11
| 1.0 | creating additional fields.yaml ex: newname_fields.yaml will not save - ##### Expected behavior
When user clicks on "save" while working with a fields.yaml that has a different name (example: newname_fields.yaml) the file should be saved.
##### Actual behavior
When user clicks on "save" while working with a fields.yaml that has a different name (example: newname_fields.yaml) the file does not save, and the Builder plugin shows that there are unsaved changes
##### Reproduce steps
- Open up a model
- Click on the + symbol adjacent to the Form tree branch
- Type in a new fields.yaml name such as "newname_fields.yaml"
- Save document by clicking the save option
##### October build
365
##### Builder Version
1.0.11
| priority | creating additional fields yaml ex newname fields yaml will not save expected behavior when user clicks on save while working with a fields yaml that has a different name example newname fields yaml the file should be saved actual behavior when user clicks on save while working with a fields yaml that has a different name example newname fields yaml the file does not save and the builder plugin shows that there are unsaved changes reproduce steps open up a model click on the symbol adjacent to the form tree branch type in a new fields yaml name such as newname fields yaml save document by clicking the save option october build builder version | 1 |
805,753 | 29,661,481,446 | IssuesEvent | 2023-06-10 07:54:00 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [xCluster][Tablet Splitting] xCluster with automatic tablet splitting test failing at data validation step | kind/bug area/docdb priority/high jira-originated | Jira Link: [DB-6692](https://yugabyte.atlassian.net/browse/DB-6692)
[DB-6692]: https://yugabyte.atlassian.net/browse/DB-6692?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [xCluster][Tablet Splitting] xCluster with automatic tablet splitting test failing at data validation step - Jira Link: [DB-6692](https://yugabyte.atlassian.net/browse/DB-6692)
[DB-6692]: https://yugabyte.atlassian.net/browse/DB-6692?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | xcluster with automatic tablet splitting test failing at data validation step jira link | 1 |
292,214 | 8,954,817,379 | IssuesEvent | 2019-01-26 00:53:48 | IBAMR/IBAMR | https://api.github.com/repos/IBAMR/IBAMR | opened | Avoid calling FE::reinit when possible. | High Priority :warning: Meta | The heart solver (with the afterload 10 benchmark) spends about 20% of its time calling `libMesh::FE::reinit`: unfortunately, specifying exactly what needs to be recomputed to libMesh is a bit tricky so we end up doing a lot of work there. In other benchmarks that have thicker structures (e.g., IBFE/explicit/ex4) we spend nearly half of our time calling this function. I'll have to do some more profiling to see where we are with the full heart model.
There is a bunch of stuff we can do to fix this:
- [ ] A lot of the calls to `reinit` come from changing quadrature rules. We should just cache multiple `FE` objects (one per quadrature rule) to get around this. This is mostly done (see #434). This speeds up the first part of `interpWeighted` where we only need the function values.
- [ ] We can get the `JxW` values by using `FEMap`. We should implement a similar cache strategy here: `FEMap` only depends on details of the finite element space (FE order, FE family) and the quadrature rule, so we can avoid recomputing everything but the Jacobians if we have cache here that uses the same strategy. | 1.0 | Avoid calling FE::reinit when possible. - The heart solver (with the afterload 10 benchmark) spends about 20% of its time calling `libMesh::FE::reinit`: unfortunately, specifying exactly what needs to be recomputed to libMesh is a bit tricky so we end up doing a lot of work there. In other benchmarks that have thicker structures (e.g., IBFE/explicit/ex4) we spend nearly half of our time calling this function. I'll have to do some more profiling to see where we are with the full heart model.
There is a bunch of stuff we can do to fix this:
- [ ] A lot of the calls to `reinit` come from changing quadrature rules. We should just cache multiple `FE` objects (one per quadrature rule) to get around this. This is mostly done (see #434). This speeds up the first part of `interpWeighted` where we only need the function values.
- [ ] We can get the `JxW` values by using `FEMap`. We should implement a similar cache strategy here: `FEMap` only depends on details of the finite element space (FE order, FE family) and the quadrature rule, so we can avoid recomputing everything but the Jacobians if we have cache here that uses the same strategy. | priority | avoid calling fe reinit when possible the heart solver with the afterload benchmark spends about of its time calling libmesh fe reinit unfortunately specifying exactly what needs to be recomputed to libmesh is a bit tricky so we end up doing a lot of work there in other benchmarks that have thicker structures e g ibfe explicit we spend nearly half of our time calling this function i ll have to do some more profiling to see where we are with the full heart model there is a bunch of stuff we can do to fix this a lot of the calls to reinit come from changing quadrature rules we should just cache multiple fe objects one per quadrature rule to get around this this is mostly done see this speeds up the first part of interpweighted where we only need the function values we can get the jxw values by using femap we should implement a similar cache strategy here femap only depends on details of the finite element space fe order fe family and the quadrature rule so we can avoid recomputing everything but the jacobians if we have cache here that uses the same strategy | 1 |
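The caching strategy described in the checklist above — one `FE` object per quadrature rule, and an `FEMap` keyed on the same details — is memoization on a small key. The sketch below illustrates the idea in Python, not IBAMR's actual C++ classes:

```python
class FECache:
    """Memoize expensive reinit-style objects on (family, order, quadrature).

    `factory` stands in for whatever constructs the real object (an FE or
    FEMap); it is only invoked on a cache miss, so repeated quadrature-rule
    switches reuse prior work instead of recomputing it.
    """
    def __init__(self, factory):
        self._factory = factory
        self._cache = {}

    def get(self, family, order, quadrature):
        key = (family, order, quadrature)
        if key not in self._cache:
            self._cache[key] = self._factory(family, order, quadrature)
        return self._cache[key]
```

Since the key only involves the finite element space and the quadrature rule, everything except the per-element Jacobians can be reused across elements.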
625,191 | 19,721,398,580 | IssuesEvent | 2022-01-13 15:41:47 | merico-dev/lake | https://api.github.com/repos/merico-dev/lake | closed | Missing API Validation Layer - Results in lost data | bug refactor priority: high | ## Description:
- API validation is needed to ensure response data from APIs are consistent with what we expect
- Every entity we request from an API gets converted from a response to a Go struct prior to saving in our DB
- Ex. Commits from GitHub
- If the responses from the API change in any way from what we expect to get back, then we risk losing data
- This can happen silently because we don't have validation to ensure the data matches what we expect
- API responses can vary, or be changed intentionally by the provider, so we must be prepared for this
- We have discovered this problem exists in our system in the GitHub plugin, so it may exist in others as well. If not now, it could in the future.
## Example (tested, proven):
- In the GitHub plugin, only some of our response data contained author data and committer data
- This is because some users are not verified on GitHub
- Some commits were not being saved because of the failed conversion
- We expected certain data, but the responses were inconsistent leading to lost data
- Without validation we have no way to know this is happening, since this error occurs at runtime
## Screenshots:
1. Commit w/ author

2. Commit w/out author

3. Proof of why author / committer is not found (no valid user account on GitHub)

4. Problem in the code

## Possible Solutions:
- Add validation layer
- Maybe this package: https://pkg.go.dev/gopkg.in/go-playground/validator.v8#section-readme
@joncodo Please comment / edit with your understanding if needed
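As an illustrative sketch of the proposed validation layer — the plugin itself is written in Go, and the field names below are assumptions rather than Lake's real schema — the idea is to fail loudly on unexpected API shapes instead of dropping records silently:

```javascript
// Hypothetical shape check for a GitHub commit response. GitHub returns
// "author": null for commits whose author has no verified account, which
// is exactly the case that was silently dropped.
const validateCommit = (commit) => {
  const errors = [];
  if (typeof commit.sha !== "string" || commit.sha === "") {
    errors.push("missing sha");
  }
  if (commit.author == null || typeof commit.author.login !== "string") {
    errors.push("missing author (unverified GitHub user?)");
  }
  return errors; // empty array means the record is safe to persist
};

console.log(validateCommit({ sha: "abc123", author: { login: "octocat" } })); // []
console.log(validateCommit({ sha: "def456", author: null }));
```

Running each response through such a check (or a schema package like the linked go-playground/validator) turns a silent conversion failure into a logged, countable error.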
| 1.0 | Missing API Validation Layer - Results in lost data - ## Description:
- API validation is needed to ensure response data from APIs are consistent with what we expect
- Every entity we request from an API gets converted from a response to a Go struct prior to saving in our DB
- Ex. Commits from GitHub
- If the responses from the API change in any way from what we expect to get back, then we risk losing data
- This can happen silently because we don't have validation to ensure the data matches what we expect
- API responses can vary, or be changed intentionally by the provider, so we must be prepared for this
- We have discovered this problem exists in our system in the GitHub plugin, so it may exist in others as well. If not now, it could in the future.
## Example (tested, proven):
- In the GitHub plugin, only some of our response data contained author data and committer data
- This is because some users are not verified on GitHub
- Some commits were not being saved because of the failed conversion
- We expected certain data, but the responses were inconsistent leading to lost data
- Without validation we have no way to know this is happening, since this error occurs at runtime
## Screenshots:
1. Commit w/ author

2. Commit w/out author

3. Proof of why author / committer is not found (no valid user account on GitHub)

4. Problem in the code

## Possible Solutions:
- Add validation layer
- Maybe this package: https://pkg.go.dev/gopkg.in/go-playground/validator.v8#section-readme
@joncodo Please comment / edit with your understanding if needed
| priority | missing api validation layer results in lost data description api validation is needed to ensure response data from apis are consistent with what we expect every entity we request from an api gets converted from a response to a go struct prior to saving in our db ex commits from github if the responses from the api change in any way from what we expect to get back then we risk losing data this can happen silently because we donโt have validation to ensure the data matches what we expect api responses can vary or be changed intentionally by the provider so we must be prepared for this we have discovered this problem exists in our system in the github plugin so it may exist in others as well if not now it could in the future example tested proven in the github plugin only some of our response data contained author data and committer data this is because some users are not verified on github some commits were not being saved because of the failed conversion we expected certain data but the responses were inconsistent leading to lost data without validation we have no way to know this is happening since this error occurs at runtime screenshots commit w author commit w out author proof of why author committer is not found no valid user account on github problem in the code possible solutions add validation layer maybe this package joncodo please comment edit with your understanding if needed | 1 |
6,744 | 2,594,465,642 | IssuesEvent | 2015-02-20 03:49:17 | CSSE1001/MyPyTutor | https://api.github.com/repos/CSSE1001/MyPyTutor | closed | problem with code analysis? | bug priority: high problem set | For the Using Classes problem I wrote
if friend is not None:
    print("Friends with {}".format(friend))
and got (in the terminal - god knows what happens if run in windows)
File "/home/pjr/MyPyTutor3/MyPyTutor/code/tutorlib/analysis/ast_tools.py", line 87, in identifier
'No known identifier exists for node {}'.format(node)
tutorlib.analysis.support.StaticAnalysisError: No known identifier exists for node <_ast.Str object at 0x7ff2d5f10a90>
| 1.0 | problem with code analysis? - For the Using Classes problem I wrote
if friend is not None:
    print("Friends with {}".format(friend))
and got (in the terminal - god knows what happens if run in windows)
File "/home/pjr/MyPyTutor3/MyPyTutor/code/tutorlib/analysis/ast_tools.py", line 87, in identifier
'No known identifier exists for node {}'.format(node)
tutorlib.analysis.support.StaticAnalysisError: No known identifier exists for node <_ast.Str object at 0x7ff2d5f10a90>
| priority | problem with code analysis for the using classes problem i wrote if friend is not none print friends with format friend and got in the terminal god knows what happens if run in windows file home pjr mypytutor code tutorlib analysis ast tools py line in identifier no known identifier exists for node format node tutorlib analysis support staticanalysiserror no known identifier exists for node | 1 |
463,913 | 13,303,189,431 | IssuesEvent | 2020-08-25 15:11:43 | Automattic/woocommerce-payments | https://api.github.com/repos/Automattic/woocommerce-payments | opened | Force WooCommerce Payments into DEV mode if wp_get_environment_type() returns staging or development | Priority: High | Don't allow the user to create a live Stripe account for a staging or development site. Only allow test account creation.
Available in WP 5.5
https://make.wordpress.org/core/2020/07/24/new-wp_get_environment_type-function-in-wordpress-5-5/
See also #749
cc @thenbrent | 1.0 | Force WooCommerce Payments into DEV mode if wp_get_environment_type() returns staging or development - Don't allow the user to create a live Stripe account for a staging or development site. Only allow test account creation.
Available in WP 5.5
https://make.wordpress.org/core/2020/07/24/new-wp_get_environment_type-function-in-wordpress-5-5/
See also #749
cc @thenbrent | priority | force woocommerce payments into dev mode if wp get environment type returns staging or development don t allow the user to create a live stripe account for a staging or development site only allow test account creation available in wp see also cc thenbrent | 1 |
512,764 | 14,908,940,829 | IssuesEvent | 2021-01-22 07:01:13 | xournalpp/xournalpp | https://api.github.com/repos/xournalpp/xournalpp | closed | The application crashes when I try to set audio directory | Crash Dependency Issue bug priority::high | (Please complete the following information, and then delete this line)
**Affects versions :**
- OS: Linux (Ubuntu 18.04)
- Plasma 5.12
- libgtk 3.22.30
- Version of Xournal++ 1.0.6-0~201912201858~ubuntu18.04.1 and also xournalpp_1.0.6-0~202001210214~ubuntu18.04.1_amd64
The application crashes when I try to set audio directory
I go into the Preferences dialog, in the "Audio Recording" tab. I click to select the directory for audio and
choose "Other...". Now the standard GTK open-file dialog opens as expected. Now if I choose any directory shortcut (such as "Home"), the application crashes.
**To Reproduce**
Steps to reproduce the behavior:
1. I go into the Preferences dialog
2. I select the "Audio Recording" tab
3. I click to select directory for audio and choose "Other..."
4. Now standard open file GTK dialog opens as expected.
5. At this point choosing a directory shortcut, like "Home" in filechooser the program crashes.
I Attach the error log found in ~/.xournalpp/errorlogs/
[errorlog.20200123-152330.log](https://github.com/xournalpp/xournalpp/files/4103524/errorlog.20200123-152330.log)
| 1.0 | The application crashes when I try to set audio directory - (Please complete the following information, and then delete this line)
**Affects versions :**
- OS: Linux (Ubuntu 18.04)
- Plasma 5.12
- libgtk 3.22.30
- Version of Xournal++ 1.0.6-0~201912201858~ubuntu18.04.1 and also xournalpp_1.0.6-0~202001210214~ubuntu18.04.1_amd64
The application crashes when I try to set audio directory
I go into the Preferences dialog, in the "Audio Recording" tab. I click to select the directory for audio and
choose "Other...". Now the standard GTK open-file dialog opens as expected. Now if I choose any directory shortcut (such as "Home"), the application crashes.
**To Reproduce**
Steps to reproduce the behavior:
1. I go into the Preferences dialog
2. I select the "Audio Recording" tab
3. I click to select directory for audio and choose "Other..."
4. Now standard open file GTK dialog opens as expected.
5. At this point choosing a directory shortcut, like "Home" in filechooser the program crashes.
I Attach the error log found in ~/.xournalpp/errorlogs/
[errorlog.20200123-152330.log](https://github.com/xournalpp/xournalpp/files/4103524/errorlog.20200123-152330.log)
| priority | the application crashes when i try to set audio directory please complete the following information and then delete this line affects versions os linux ubuntu plasma libgtk version of xournal and also xournalpp the application crashes when i try to set audio directory i go into the preferences dialog in the audio recording tab i clic to select directory for audio and choose other now standard open file gtk dialog opens as expected now if i choose any to reproduce steps to reproduce the behavior i go into the preferences dialog i select the audio recording tab i clic to select directory for audio and choose other now standard open file gtk dialog opens as expected at this point choosing a directory shortcut like home in filechooser the program crashes i attach the error log found in xournalpp errorlogs | 1 |
283,496 | 8,719,727,948 | IssuesEvent | 2018-12-08 03:42:32 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | CSG meshing broken for a large problem. | bug likelihood medium priority reviewed severity high | Greg Greenman had a large CSG problem and he tried both the adaptive and multipass methods and neither version ever finished. He increased the number of processors to 64 for both models and this didn't make it any better. He reported that it is slightly larger than a problem that works ok.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1793
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: CSG meshing broken for a large problem.
Assigned to: Eric Brugger
Category:
Target version: 2.7.3
Author: Eric Brugger
Start: 04/01/2014
Due date:
% Done: 100
Estimated time: 24.0
Created: 04/01/2014 07:44 pm
Updated: 04/25/2014 07:28 pm
Likelihood: 3 - Occasional
Severity: 5 - Very Serious
Found in version: 2.7.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Greg Greenman had a large CSG problem and he tried both the adaptive and multipass methods and neither version ever finished. He increased the number of processors to 64 for both models and this didn't make it any better. He reported that it is slightly larger than a problem that works ok.
Comments:
I committed revisions 23224 and 23226 to the 2.7 RC and trunk with the following changes:
1) I modified the multi-pass discretization of CSG meshes to process each domain independently if the total number of boundary surfaces is above the internal limit. This will make it possible to handle larger CSG meshes. I also added the class vtkCSGFixedLengthBitField in order to make it easier to change the number of bits used by the multi-pass discretization. I added a description of the change to the release notes. This resolves #1793.
A visit_vtk/full/vtkCSGFixedLengthBitField.h
M avt/Database/Database/avtTransformManager.C
M resources/help/en_US/relnotes2.7.3.html
M visit_vtk/full/vtkBinaryPartitionVolumeFromVolume.C
M visit_vtk/full/vtkBinaryPartitionVolumeFromVolume.h
M visit_vtk/full/vtkCSGGrid.C
M visit_vtk/full/vtkCSGGrid.h
M visit_vtk/full/vtkMultiSplitter.C
M visit_vtk/full/vtkMultiSplitter.h
M visit_vtk/full/vtkVisItSplitter.C
M visit_vtk/full/vtkVisItSplitter.h
M visit_vtk/full/vtkVolumeFromCSGVolume.C
M visit_vtk/full/vtkVolumeFromCSGVolume.h
| 1.0 | CSG meshing broken for a large problem. - Greg Greenman had a large CSG problem and he tried both the adaptive and multipass methods and neither version ever finished. He increased the number of processors to 64 for both models and this didn't make it any better. He reported that it is slightly larger than a problem that works ok.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1793
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: CSG meshing broken for a large problem.
Assigned to: Eric Brugger
Category:
Target version: 2.7.3
Author: Eric Brugger
Start: 04/01/2014
Due date:
% Done: 100
Estimated time: 24.0
Created: 04/01/2014 07:44 pm
Updated: 04/25/2014 07:28 pm
Likelihood: 3 - Occasional
Severity: 5 - Very Serious
Found in version: 2.7.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Greg Greenman had a large CSG problem and he tried both the adaptive and multipass methods and neither version ever finished. He increased the number of processors to 64 for both models and this didn't make it any better. He reported that it is slightly larger than a problem that works ok.
Comments:
I committed revisions 23224 and 23226 to the 2.7 RC and trunk with the following changes:
1) I modified the multi-pass discretization of CSG meshes to process each domain independently if the total number of boundary surfaces is above the internal limit. This will make it possible to handle larger CSG meshes. I also added the class vtkCSGFixedLengthBitField in order to make it easier to change the number of bits used by the multi-pass discretization. I added a description of the change to the release notes. This resolves #1793.
A visit_vtk/full/vtkCSGFixedLengthBitField.h
M avt/Database/Database/avtTransformManager.C
M resources/help/en_US/relnotes2.7.3.html
M visit_vtk/full/vtkBinaryPartitionVolumeFromVolume.C
M visit_vtk/full/vtkBinaryPartitionVolumeFromVolume.h
M visit_vtk/full/vtkCSGGrid.C
M visit_vtk/full/vtkCSGGrid.h
M visit_vtk/full/vtkMultiSplitter.C
M visit_vtk/full/vtkMultiSplitter.h
M visit_vtk/full/vtkVisItSplitter.C
M visit_vtk/full/vtkVisItSplitter.h
M visit_vtk/full/vtkVolumeFromCSGVolume.C
M visit_vtk/full/vtkVolumeFromCSGVolume.h
| priority | csg meshing broken for a large problem greg greenman had a large csg problem and he tried both the adaptive and multipass methods and neither version ever finished he increased the number of processors to for both models and this didn t make it any better he reported that it is slightly larger than a problem that works ok redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority urgent subject csg meshing broken for a large problem assigned to eric brugger category target version author eric brugger start due date done estimated time created pm updated pm likelihood occasional severity very serious found in version impact expected use os all support group any description greg greenman had a large csg problem and he tried both the adaptive and multipass methods and neither version ever finished he increased the number of processors to for both models and this didn t make it any better he reported that it is slightly larger than a problem that works ok comments i committed revisions and to the rc and trunk with thefollowing changes i modified the multi pass discretization of csg meshes to process each domain independently if the number total number of boundary surfaces is above the internal limit this will make if possible to handle larger csg meshes i also added the class vtkcsgfixedlengthbitfield in order to make it easier to change the number of bits used by the multi pass discretization i added a description of the change to the release notes this resolves a visit vtk full vtkcsgfixedlengthbitfield hm avt database database avttransformmanager cm resources help en us htmlm visit vtk full vtkbinarypartitionvolumefromvolume cm visit vtk full vtkbinarypartitionvolumefromvolume hm visit vtk full vtkcsggrid cm visit vtk full vtkcsggrid hm visit vtk full 
vtkmultisplitter cm visit vtk full vtkmultisplitter hm visit vtk full vtkvisitsplitter cm visit vtk full vtkvisitsplitter hm visit vtk full vtkvolumefromcsgvolume cm visit vtk full vtkvolumefromcsgvolume h | 1 |
696,033 | 23,880,424,261 | IssuesEvent | 2022-09-08 00:28:48 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | Reports not working | Priority-High (Needed for work) Bug function-Reports | Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html
**Describe the bug**
From an Edit loan form, click Print Any Report, Select Print Any Report at
http://reports.arctos.database.museum/reporter/report_printer.cfm?auth_key=6F3E0DAA-137E-48E1-A6D6919D501BB795&transaction_id=21134073
Get "nope" as result
**To Reproduce**
Steps to reproduce the behavior: see above
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Data**
If this involves external data, attach the __actual__ data that caused the problem. Do not attach a transformation or subset. You may ZIP most formats to attach, or request a Box email address for very large files.
**Desktop (please complete the following information):**
- OS: [e.g. iOS] Windows
- Browser [e.g. chrome, safari] Chrome (happened once in Firefox, then worked in Firefox)
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
**Priority**
Please assign a priority-label. Unprioritized issues get sent into a black hole of despair.
| 1.0 | Reports not working - Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html
**Describe the bug**
From an Edit loan form, click Print Any Report, Select Print Any Report at
http://reports.arctos.database.museum/reporter/report_printer.cfm?auth_key=6F3E0DAA-137E-48E1-A6D6919D501BB795&transaction_id=21134073
Get "nope" as result
**To Reproduce**
Steps to reproduce the behavior: see above
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Data**
If this involves external data, attach the __actual__ data that caused the problem. Do not attach a transformation or subset. You may ZIP most formats to attach, or request a Box email address for very large files.
**Desktop (please complete the following information):**
- OS: [e.g. iOS] Windows
- Browser [e.g. chrome, safari] Chrome (happened once in Firefox, then worked in Firefox)
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
**Priority**
Please assign a priority-label. Unprioritized issues get sent into a black hole of despair.
| priority | reports not working issue documentation is describe the bug from an edit loan form click print any report select print any report at get nope as result to reproduce steps to reproduce the behavior see above go to click on scroll down to see error expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem data if this involves external data attach the actual data that caused the problem do not attach a transformation or subset you may zip most formats to attach or request a box email address for very large files desktop please complete the following information os windows browser chrome happened once in firefox then worked in firefox version additional context add any other context about the problem here priority please assign a priority label unprioritized issues get sent into a black hole of despair | 1 |
194,357 | 6,894,101,296 | IssuesEvent | 2017-11-23 08:38:54 | ballerinalang/composer | https://api.github.com/repos/ballerinalang/composer | closed | Error when the restful and passthrough service samples are loaded | Priority/High Severity/Major Type/Bug | Release 0.93
The following error is logged when the restful and passthrough service samples are opened
```
2017-08-29 10:08:29,128 WARN - internal error occured in 'ballerina.net.ws:<init>'
2017-08-29 10:08:29,538 WARN - internal error occured in 'ballerina.net.ws:<init>'
2017-08-29 10:08:29,960 WARN - internal error occured in 'ballerina.net.ws:<init>'
2017-08-29 10:08:30,559 WARN - internal error occured in 'ballerina.net.ws:<init>'
```
| 1.0 | Error when the restful and passthrough service samples are loaded - Release 0.93
The following error is logged when the restful and passthrough service samples are opened
```
2017-08-29 10:08:29,128 WARN - internal error occured in 'ballerina.net.ws:<init>'
2017-08-29 10:08:29,538 WARN - internal error occured in 'ballerina.net.ws:<init>'
2017-08-29 10:08:29,960 WARN - internal error occured in 'ballerina.net.ws:<init>'
2017-08-29 10:08:30,559 WARN - internal error occured in 'ballerina.net.ws:<init>'
```
| priority | error when the restful and passthrough service samples are loaded release the following error is logged when the restful and passthrough service samples are opened warn internal error occured in ballerina net ws warn internal error occured in ballerina net ws warn internal error occured in ballerina net ws warn internal error occured in ballerina net ws | 1 |
468,065 | 13,461,045,152 | IssuesEvent | 2020-09-09 14:21:07 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Issues with GFI popup in edit mode | Accepted Priority: High Project: C125 bug | ## Description
Several issues in popup of GFI (when set to burger menu --> settings --> Feature info trigger is set to `hover`).
See this map:
https://dev.mapstore2.geo-solutions.it/mapstore/#/viewer/openlayers/27684
### Overlapping entries
In some cases the results come in a column, instead of being paginated.

### Arrows are showing also with one single result

### A glitch
Sometimes the map pans even if not necessary (I think because of a temporary stacking of items in the popup). I was not able to replicate it, @tdipisa showed me in a different map.
All these are regressions not present in the latest release. So please check what recent changes introduced it and try to fix.
| 1.0 | Issues with GFI popup in edit mode - ## Description
Several issues in popup of GFI (when set to burger menu --> settings --> Feature info trigger is set to `hover`).
See this map:
https://dev.mapstore2.geo-solutions.it/mapstore/#/viewer/openlayers/27684
### Overlapping entries
In some cases the results come in a column, instead of being paginated.

### Arrows are showing also with one single result

### A glitch
Sometimes the map pans even if not necessary (I think because of a temporary stacking of items in the popup). I was not able to replicate it, @tdipisa showed me in a different map.
All these are regressions not present in the latest release. So please check what recent changes introduced it and try to fix.
| priority | issues with gfi popup in edit mode description several issues in popup of gfi when set to burger menu settings feature info trigger is set to hover see this map overlapping entries in same cases the results come in column instad of being paginated arrows are showing also with one single result a glitch sometimes the map pans even if not necessary i think because of a temporary stacking of items in the popup i was not able to replicate it tdipisa showed me in a different map all these are regressions not present in the latest release so please check what recent changes introduced it and try to fix | 1 |
399,615 | 11,757,822,460 | IssuesEvent | 2020-03-13 14:22:06 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | Bell message doesnt match tx | Bug Needed for V2 launch Priority: High | There was 3 bids on the book:
.3 for 10
.25 for 10
.2 for 10
I sold all 30 at once....I only get a filled bell notification for one of the 3 trades
 | 1.0 | Bell message doesnt match tx - There was 3 bids on the book:
.3 for 10
.25 for 10
.2 for 10
I sold all 30 at once....I only get a filled bell notification for one of the 3 trades
 | priority | bell message doesnt match tx there was bids on the book for for for i sold all at once i only get a filled bell notification for one of the trades | 1 |
445,511 | 12,832,068,665 | IssuesEvent | 2020-07-07 06:57:20 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | Create Temperature class for localized temperature formatting | Docs: not needed Effort: small Feature Module: vaccines Priority: high | ## Is your feature request related to a problem? Please describe.
To easily display temperatures as F or C or whatever for a particular language that's been selected, it would be easier to do this similar to how `moment` or `currency` are used. Except for temperatures, there's not really a widely used package
## Describe the solution you'd like
Create a singleton `temperature` which when passed a number returns and object which represents a temperature.
Has methods `format = (scale = "celsius") => `${this.temp}${degree symbol}${celsisus ? C : F}` kind of thing.
## Implementation
I think a global property which sets temperatures to be formatted to celsius or farenheit
## Describe alternatives you've considered
N/A
## Additional context
N/A
| 1.0 | Create Temperature class for localized temperature formatting - ## Is your feature request related to a problem? Please describe.
To easily display temperatures as F or C or whatever for a particular language that's been selected, it would be easier to do this similar to how `moment` or `currency` are used. Except for temperatures, there's not really a widely used package
## Describe the solution you'd like
Create a singleton `temperature` which, when passed a number, returns an object which represents a temperature.
Has methods `format = (scale = "celsius") => `${this.temp}${degree symbol}${celsius ? C : F}` kind of thing.
## Implementation
I think a global property which sets temperatures to be formatted to celsius or farenheit
## Describe alternatives you've considered
N/A
## Additional context
N/A
| priority | create temperature class for localized temperature formatting is your feature request related to a problem please describe to easily display temperatures as f or c or whatever for a particular language that s been selected it would be easier to do this similar to how moment or currency are used except for temperatures there s not really a widely used package describe the solution you d like create a singleton temperature which when passed a number returns and object which represents a temperature has methods format scale celsius this temp degree symbol celsisus c f kind of thing implementation i think a global property which sets temperatures to be formatted to celsius or farenheit describe alternatives you ve considered n a additional context n a | 1 |
53,454 | 3,040,433,815 | IssuesEvent | 2015-08-07 15:23:11 | centreon/centreon-clapi | https://api.github.com/repos/centreon/centreon-clapi | closed | Missing functionality: [ADD] Hostgroup into Host Object | Component: Resolution Priority: High Status: Rejected Tracker: Enhancement | ---
Author Name: **christophe demoire** (christophe demoire)
Original Redmine Issue: 1882, https://forge.centreon.com/issues/1882
Original Date: 2010-08-02
Original Assignee: Julien Mathis
---
None
| 1.0 | Missing functionality: [ADD] Hostgroup into Host Object - ---
Author Name: **christophe demoire** (christophe demoire)
Original Redmine Issue: 1882, https://forge.centreon.com/issues/1882
Original Date: 2010-08-02
Original Assignee: Julien Mathis
---
None
| priority | missing functionality hostgroup into host object author name christophe demoire christophe demoire original redmine issue original date original assignee julien mathis none | 1 |
68,338 | 3,286,469,991 | IssuesEvent | 2015-10-29 02:54:11 | cs2103aug2015-t14-1j/main | https://api.github.com/repos/cs2103aug2015-t14-1j/main | closed | different colour code for different types of task | priority.high type.enhancement | red colour time for deadline task
blue colour time for duration task | 1.0 | different colour code for different types of task - red colour time for deadline task
blue colour time for duration task | priority | different colour code for different types of task red colour time for deadline task blue colour time for duration task | 1 |
298,414 | 9,199,924,483 | IssuesEvent | 2019-03-07 15:57:03 | storybooks/storybook | https://api.github.com/repos/storybooks/storybook | closed | Story search does not work in production mode | bug core has workaround high priority ui | Thanks for the great library, super excited about SB5.0!
**Describe the bug**
In production mode, typing two characters into the search fails with the error:
```
Uncaught TypeError: i.parameters.fileName.includes is not a function
```
Typing > 2 characters leads to:
```
Uncaught TypeError: e.toLocaleLowerCase is not a function
```
In development, this is not an issue.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a custom webpack config, and force the `config.mode = "production"`
2. Start up storybook locally
3. Type in two or more characters you expect to match
4. See error
Tracing through, I found that the `parameters.fileName` is a `number` when `config.mode = "production"` and a `string` in dev (the actual real file name)
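A defensive coercion along these lines (purely illustrative — not Storybook's actual code, and `matchesSearch` is a made-up name) would avoid both crashes by converting the metadata to strings before calling string methods:

```javascript
// Coerce possibly-non-string values (e.g. a numeric fileName produced in
// production mode) before using String.prototype methods like includes()
// and toLocaleLowerCase().
const matchesSearch = (story, query) => {
  const fileName = String((story.parameters || {}).fileName ?? "");
  const name = String(story.name ?? "");
  const q = String(query).toLocaleLowerCase();
  return (
    fileName.toLocaleLowerCase().includes(q) ||
    name.toLocaleLowerCase().includes(q)
  );
};

console.log(matchesSearch({ parameters: { fileName: 123 }, name: "Button" }, "12"));     // true
console.log(matchesSearch({ parameters: { fileName: 999 }, name: "Card" }, "BUTTON"));   // false
```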
I've also seen this behavior in a storybook from other person's issue:
https://next--storybooks-official.netlify.com/?path=/story/ui-panel--default (from #5772)
**Expected behavior**
Searching for components should work in development and in production
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Code snippets**
```
// From my own project
const baseConfig = require("../webpack.config")
module.exports = ({ config, mode }) => {
config.mode = "production";
config.module = baseConfig.module;
config.resolve = baseConfig.resolve;
config.optimization = baseConfig.optimization;
return config;
};
```
```
// ...baseConfig
optimization: {
minimizer: [
new UglifyJsPlugin({
sourceMap: true,
parallel: true,
cache: true,
uglifyOptions: {
compress: {
conditionals: false,
warnings: false,
},
},
}),
],
splitChunks: {
chunks: "all",
cacheGroups: {
commons: {
name: "vendor",
test: /[\\/]node_modules[\\/]/,
},
},
},
},
```
**System:**
- OS: MacOS Mojave
- Device: MacbookPro 2018
- Browser: Chrome, Safari
- Framework: React
- Addons: [if relevant]
- Version: [e.g. 5.0.0]
**Additional context**
Add any other context about the problem here.
- Webpack: "webpack": "4.8.3"
| 1.0 | Story search does not work in production mode - Thanks for the great library, super excited about SB5.0!
**Describe the bug**
In production mode, typing two characters into the search fails with the error:
```
Uncaught TypeError: i.parameters.fileName.includes is not a function
```
Typing > 2 characters leads to:
```
Uncaught TypeError: e.toLocaleLowerCase is not a function
```
In development, this is not an issue.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a custom webpack config, and force the `config.mode = "production"`
2. Start up storybook locally
3. Type in two or more characters you expect to match
4. See error
Tracing through, I found that the `parameters.fileName` is a `number` when `config.mode = "production"` and a `string` in dev (the actual real file name)
I've also seen this behavior in a storybook from other person's issue:
https://next--storybooks-official.netlify.com/?path=/story/ui-panel--default (from #5772)
**Expected behavior**
Searching for components should work in development and in production
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Code snippets**
```
// From my own project
const baseConfig = require("../webpack.config")
module.exports = ({ config, mode }) => {
config.mode = "production";
config.module = baseConfig.module;
config.resolve = baseConfig.resolve;
config.optimization = baseConfig.optimization;
return config;
};
```
```
// ...baseConfig
optimization: {
minimizer: [
new UglifyJsPlugin({
sourceMap: true,
parallel: true,
cache: true,
uglifyOptions: {
compress: {
conditionals: false,
warnings: false,
},
},
}),
],
splitChunks: {
chunks: "all",
cacheGroups: {
commons: {
name: "vendor",
test: /[\\/]node_modules[\\/]/,
},
},
},
},
```
**System:**
- OS: MacOS Mojave
- Device: MacbookPro 2018
- Browser: Chrome, Safari
- Framework: React
- Addons: [if relevant]
- Version: [e.g. 5.0.0]
**Additional context**
Add any other context about the problem here.
- Webpack: "webpack": "4.8.3"
| priority | story search does not work in production mode thanks for the great library super excited about describe the bug in production mode typing two characters into the search fails with the error uncaught typeerror i parameters filename includes is not a function typing characters leads to uncaught typeerror e tolocalelowercase is not a function in development this is not an issue to reproduce steps to reproduce the behavior create a custom webpack config and force the config mode production start up storybook locally type in two or more characters you expect to match see error tracing through i found that the parameters filename is a number when config mode production and a string in dev the actual real file name i ve also seen this behavior in a storybook from other person s issue from expected behavior searching for components should work in development and in production screenshots if applicable add screenshots to help explain your problem code snippets from my own project const baseconfig require webpack config module exports config mode config mode production config module baseconfig module config resolve baseconfig resolve config optimization baseconfig optimization return config baseconfig optimization minimizer new uglifyjsplugin sourcemap true parallel true cache true uglifyoptions compress conditionals false warnings false splitchunks chunks all cachegroups commons name vendor test node modules system os macos mojave device macbookpro browser chrome safari framework react addons version additional context add any other context about the problem here webpack webpack | 1 |
196,754 | 6,948,389,750 | IssuesEvent | 2017-12-06 00:04:19 | python/mypy | https://api.github.com/repos/python/mypy | reopened | Better message for invalid types like List(int) | feature priority-0-high topic-usability | The error message "invalid type comment or annotation" isn't very helpful if a user writes `List(int)` instead of `List[int]` (or `Optional(int)`). This seems to be a common error.
Example:
```py
from typing import List
def f():
# type: () -> List(int) # invalid type comment or annotation
return [1]
```
A better message could be something like this:
```
program.py:3: error: Syntax error in type annotation
program.py:3:note: Suggestion: Use List[...] instead of List(...)
``` | 1.0 | Better message for invalid types like List(int) - The error message "invalid type comment or annotation" isn't very helpful if a user writes `List(int)` instead of `List[int]` (or `Optional(int)`). This seems to be a common error.
Example:
```py
from typing import List
def f():
# type: () -> List(int) # invalid type comment or annotation
return [1]
```
A better message could be something like this:
```
program.py:3: error: Syntax error in type annotation
program.py:3:note: Suggestion: Use List[...] instead of List(...)
``` | priority | better message for invalid types like list int the error message invalid type comment or annotation isn t very helpful if a user writes list int instead of list or optional int this seems to be a common error example py from typing import list def f type list int invalid type comment or annotation return a better message could be something like this program py error syntax error in type annotation program py note suggestion use list instead of list | 1 |
744,688 | 25,951,670,639 | IssuesEvent | 2022-12-17 17:35:54 | amitsingh-007/bypass-links | https://api.github.com/repos/amitsingh-007/bypass-links | closed | Replace redux with zustand | High Priority | * Convert either all to Class components or create a wrapper to use hook | 1.0 | Replace redux with zustand - * Convert either all to Class components or create a wrapper to use hook | priority | replace redux with zustand convert either all to class components or create a wrapper to use hook | 1 |
632,079 | 20,171,059,578 | IssuesEvent | 2022-02-10 10:26:51 | Betarena/scores | https://api.github.com/repos/Betarena/scores | opened | Header - Mouse pointer blocked | enhancement high priority | For releasing purposes we need to add a mouse pointer show a blocked icon: "not-allowed" for a few options:
https://css-tricks.com/almanac/properties/c/cursor/
Blocked option visible on:
- Odds format;
- Login option;
<img width="1550" alt="Screenshot 2022-02-10 at 10 25 10" src="https://user-images.githubusercontent.com/37311649/153387792-01f43a32-fb09-4aa5-87b6-7170ebaa8e9b.png">
<img width="1535" alt="Screenshot 2022-02-10 at 10 26 00" src="https://user-images.githubusercontent.com/37311649/153387980-259395a0-473f-48a4-b862-189769d8655c.png">
| 1.0 | Header - Mouse pointer blocked - For releasing purposes we need to add a mouse pointer show a blocked icon: "not-allowed" for a few options:
https://css-tricks.com/almanac/properties/c/cursor/
Blocked option visible on:
- Odds format;
- Login option;
<img width="1550" alt="Screenshot 2022-02-10 at 10 25 10" src="https://user-images.githubusercontent.com/37311649/153387792-01f43a32-fb09-4aa5-87b6-7170ebaa8e9b.png">
<img width="1535" alt="Screenshot 2022-02-10 at 10 26 00" src="https://user-images.githubusercontent.com/37311649/153387980-259395a0-473f-48a4-b862-189769d8655c.png">
| priority | header mouse pointer blocked for releasing purposes we need to add a mouse pointer show a blocked icon not allowed for a few options blocked option visible on odds format login option img width alt screenshot at src img width alt screenshot at src | 1 |
614,619 | 19,187,084,864 | IssuesEvent | 2021-12-05 11:41:09 | oyvind-stromsvik/spelunky | https://api.github.com/repos/oyvind-stromsvik/spelunky | opened | Regressions caused by merging the latest PR | bug high priority | Ref. https://github.com/oyvind-stromsvik/spelunky/pull/23
- [ ] Crawling over a ledge to hang from it causes you to face the wrong way.
- [ ] No longer able to push blocks.
- [ ] No longer able to pickup the climbing glove.
But the most important thing of all which actually breaks the project for everyone other than myself is:
- [ ] I moved the sprite animator out into its own package, but this package is still only available on my own local machine. I need to put it up on github. | 1.0 | Regressions caused by merging the latest PR - Ref. https://github.com/oyvind-stromsvik/spelunky/pull/23
- [ ] Crawling over a ledge to hang from it causes you to face the wrong way.
- [ ] No longer able to push blocks.
- [ ] No longer able to pickup the climbing glove.
But the most important thing of all which actually breaks the project for everyone other than myself is:
- [ ] I moved the sprite animator out into its own package, but this package is still only available on my own local machine. I need to put it up on github. | priority | regressions caused by merging the latest pr ref crawling over a ledge to hang from it causes you to face the wrong way no longer able to push blocks no longer able to pickup the climbing glove but the most important thing of all which actually breaks the project for everyone other than myself is i moved the sprite animator out into its own package but this package is still only available on my own local machine i need to put it up on github | 1 |
415,352 | 12,128,095,548 | IssuesEvent | 2020-04-22 19:53:06 | Redsart/TodoApp | https://api.github.com/repos/Redsart/TodoApp | closed | Use the Repository pattern for data persistence | priority: high status: done type: enhancement | The repository pattern helps encapsulating the functionality related data manipulations.
>Repositories are classes or components that encapsulate the logic required to access data sources. They centralize common data access functionality, providing better maintainability and decoupling the infrastructure or technology used to access databases from the domain model layer. If you use an Object-Relational Mapper (ORM) like Entity Framework, the code that must be implemented is simplified, thanks to LINQ and strong typing. This lets you focus on the data persistence logic rather than on data access plumbing.
>
> -- [Design the infrastructure persistence layer, Microsoft](https://docs.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design)
Useful resource: https://stackoverflow.com/questions/4819680/xml-repository-implementation
~Blocked by: #20 - Need to finish the design of the Data persistence layer.~ Repository module architecture done.
- [x] Add Repository module in ConsoleApp project.
- [x] Add Models sub-module: Add required data models.
- [x] Add Interfaces sub-module: Add IRepository and ITodoRepository.
- [x] XML sub-module: Add classes that implement IRepository and ITodoRepository.
- [x] Preferably use [LINQ to XML](https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/linq-to-xml-overview)
- [ ] Update Services to use the Repositories instead of classes in TodoApp.Library.
- [ ] Remove the TodoApp.Library project.
Please, refer to the [architecture diagram](https://github.com/Redsart/TodoApp/tree/master/Docs) for further details.
Actual implementation might differ from the design, but the main goal must be met. | 1.0 | Use the Repository pattern for data persistence - The repository pattern helps encapsulating the functionality related data manipulations.
>Repositories are classes or components that encapsulate the logic required to access data sources. They centralize common data access functionality, providing better maintainability and decoupling the infrastructure or technology used to access databases from the domain model layer. If you use an Object-Relational Mapper (ORM) like Entity Framework, the code that must be implemented is simplified, thanks to LINQ and strong typing. This lets you focus on the data persistence logic rather than on data access plumbing.
>
> -- [Design the infrastructure persistence layer, Microsoft](https://docs.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design)
Useful resource: https://stackoverflow.com/questions/4819680/xml-repository-implementation
~Blocked by: #20 - Need to finish the design of the Data persistence layer.~ Repository module architecture done.
- [x] Add Repository module in ConsoleApp project.
- [x] Add Models sub-module: Add required data models.
- [x] Add Interfaces sub-module: Add IRepository and ITodoRepository.
- [x] XML sub-module: Add classes that implement IRepository and ITodoRepository.
- [x] Preferably use [LINQ to XML](https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/linq-to-xml-overview)
- [ ] Update Services to use the Repositories instead of classes in TodoApp.Library.
- [ ] Remove the TodoApp.Library project.
Please, refer to the [architecture diagram](https://github.com/Redsart/TodoApp/tree/master/Docs) for further details.
Actual implementation might differ from the design, but the main goal must be met. | priority | use the repository pattern for data persistence the repository pattern helps encapsulating the functionality related data manipulations repositories are classes or components that encapsulate the logic required to access data sources they centralize common data access functionality providing better maintainability and decoupling the infrastructure or technology used to access databases from the domain model layer if you use an object relational mapper orm like entity framework the code that must be implemented is simplified thanks to linq and strong typing this lets you focus on the data persistence logic rather than on data access plumbing useful resource blocked by need to finish the design of the data persistence layer repository module architecture done add repository module in consoleapp project add models sub module add required data models add interfaces sub module add irepository and itodorepository xml sub module add classes that implement irepository and itodorepository preferably use update services to use the repositories instead of classes in todoapp library remove the todoapp library project please refer to the for further details actual implementation might differ from the design but the main goal must be met | 1 |
461,099 | 13,223,665,019 | IssuesEvent | 2020-08-17 17:39:01 | phetsims/chipper | https://api.github.com/repos/phetsims/chipper | closed | Add alternative screenshots to build process | priority:2-high | In the new sim page design, there is a requirement to use the alternative screenshots that have been added to the sim repos. Before we can implement this, we'll need to add a processing step to add these images to the build artifacts. This will require a maintenance release that will affect all sims.
I noted that the naming for the alt screenshots is standardized which should make this relatively straightforward. However, there is a non-standard number of alternative screenshots (0-3). Is this intentional or do we need to finish supplying the alternative screenshots before the rest of this work? | 1.0 | Add alternative screenshots to build process - In the new sim page design, there is a requirement to use the alternative screenshots that have been added to the sim repos. Before we can implement this, we'll need to add a processing step to add these images to the build artifacts. This will require a maintenance release that will affect all sims.
I noted that the naming for the alt screenshots is standardized which should make this relatively straightforward. However, there is a non-standard number of alternative screenshots (0-3). Is this intentional or do we need to finish supplying the alternative screenshots before the rest of this work? | priority | add alternative screenshots to build process in the new sim page design there is a requirement to use the alternative screenshots that have been added to the sim repos before we can implement this we ll need to add a processing step to add these images to the build artifacts this will require a maintenance release that will affect all sims i noted that the naming for the alt screenshots is standardized which should make this relatively straightforward however there is a non standard number of alternative screenshots is this intentional or do we need to finish supplying the alternative screenshots before the rest of this work | 1 |
665,005 | 22,295,659,052 | IssuesEvent | 2022-06-13 01:00:48 | jhudsl/intro_to_r | https://api.github.com/repos/jhudsl/intro_to_r | closed | add run previous chunks button to rstudio lecture | high priority | - also think more about getting started with first markdown document (moving file around/ just loading it into R)
- maybe a demonstration live and then the lab | 1.0 | add run previous chunks button to rstudio lecture - - also think more about getting started with first markdown document (moving file around/ just loading it into R)
- maybe a demonstration live and then the lab | priority | add run previous chunks button to rstudio lecture also think more about getting started with first markdown document moving file around just loading it into r maybe a demonstration live and then the lab | 1 |
32,665 | 2,757,465,087 | IssuesEvent | 2015-04-27 15:02:35 | tgstation/-tg-station | https://api.github.com/repos/tgstation/-tg-station | closed | Replica Pod is Broken (Hydroponics) | Bug Priority: High | Problem Description:
When attempting to revive a player via using a replica pod, it is impossible to harvest the pod. The plant grows fully, the green marker activates, stating it is ready to harvest. However no interaction allows for harvesting the pod. I have attempted this multiple times and it cannot be harvested with player blood injected and without. (Without being in an attempt to gain further seeds)
What did you expect to happen:
I expected to be able to harvest the plant. An administrator was online at the time (I don't recall the administrators name) and he admin-magicked the dead player I was trying to revive back to life after it was evident it was broken.
What happened instead:
No interaction message was received from clicking the plant to harvest. It was not harvested. The player was not revived, no seeds were returned. Eventually the plant died.
Why is this bad/What are the consequences:
It breaks a strong feature of the job role, Botanist.
Steps to reproduce the problem:
1. Plant some replica pod seeds in a Hydroponics tray.
2. Maintain the plant until full growth is reached.
3. Attempt to harvest.
Server: Sybil
Revision:
Server revision compiled on: 2015-04-02
06f09b707f3c997d5ade135540a937bc941ece40
Current Infomational Settings:
Protect Authority Roles From Traitor: 1
Protect Assistant Role From Traitor: 0
Enforce Human Authority: 1
Allow Latejoin Antagonists: 1
Protect Assistant From Antagonist: 0
Possibly related stuff (which gamemode was it? What were you doing at the time? Was anything else out of the ordinary happening?):
I have tested this during multiple rounds, at multiple different periods. It appears gamemode independent.
| 1.0 | Replica Pod is Broken (Hydroponics) - Problem Description:
When attempting to revive a player via using a replica pod, it is impossible to harvest the pod. The plant grows fully, the green marker activates, stating it is ready to harvest. However no interaction allows for harvesting the pod. I have attempted this multiple times and it cannot be harvested with player blood injected and without. (Without being in an attempt to gain further seeds)
What did you expect to happen:
I expected to be able to harvest the plant. An administrator was online at the time (I don't recall the administrators name) and he admin-magicked the dead player I was trying to revive back to life after it was evident it was broken.
What happened instead:
No interaction message was received from clicking the plant to harvest. It was not harvested. The player was not revived, no seeds were returned. Eventually the plant died.
Why is this bad/What are the consequences:
It breaks a strong feature of the job role, Botanist.
Steps to reproduce the problem:
1. Plant some replica pod seeds in a Hydroponics tray.
2. Maintain the plant until full growth is reached.
3. Attempt to harvest.
Server: Sybil
Revision:
Server revision compiled on: 2015-04-02
06f09b707f3c997d5ade135540a937bc941ece40
Current Infomational Settings:
Protect Authority Roles From Traitor: 1
Protect Assistant Role From Traitor: 0
Enforce Human Authority: 1
Allow Latejoin Antagonists: 1
Protect Assistant From Antagonist: 0
Possibly related stuff (which gamemode was it? What were you doing at the time? Was anything else out of the ordinary happening?):
I have tested this during multiple rounds, at multiple different periods. It appears gamemode independent.
| priority | replica pod is broken hydroponics problem description when attempting to revive a player via using a replica pod it is impossible to harvest the pod the plant grows fully the green marker activates stating it is ready to harvest however no interaction allows for harvesting the pod i have attempted this multiple times and it cannot be harvested with player blood injected and without without being in an attempt to gain further seeds what did you expect to happen i expected to be able to harvest the plant an administrator was online at the time i don t recall the administrators name and he admin magicked the dead player i was trying to revive back to life after it was evident it was broken what happened instead no interaction message was received from clicking the plant to harvest it was not harvested the player was not revived no seeds were returned eventually the plant died why is this bad what are the consequences it breaks a strong feature of the job role botanist steps to reproduce the problem plant some replica pod seeds in a hydroponics tray maintain the plant until full growth is reached attempt to harvest server sybil revision server revision compiled on current infomational settings protect authority roles from traitor protect assistant role from traitor enforce human authority allow latejoin antagonists protect assistant from antagonist possibly related stuff which gamemode was it what were you doing at the time was anything else out of the ordinary happening i have tested this during multiple rounds at multiple different periods it appears gamemode independent | 1 |
738,303 | 25,551,805,398 | IssuesEvent | 2022-11-30 00:49:48 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | Feature Request - loan item picks | Priority-High (Needed for work) Enhancement Help wanted | **Is your feature request related to a problem? Please describe.**
I am going to [rebuild and modernize specimen search and results](https://github.com/ArctosDB/arctos/issues/2745). The injected loan pick thing is antique and crusty and not very compatible with modern architecture.
**Describe what you're trying to accomplish**
* Not try to do crazy things to the results table, all in the name of sanity and sustainability
* Make the loan pick thing better
**Describe the solution you'd like**
I think this is best done via a 'manage' option that redirects to a loan part picker table. This would be a static/pre-defined table, but we can add WHATEVER to it, and it would be born with the correct part-centric structure. It would also be completely internal so there's lots more flexibility than there can be in anything which must also support public users.
Another possibility is a 'pick loan' button which pops open an overlay (perhaps similar to the current embedded thing). I'm not crazy about this idea - it's a potentially expensive check in a very expensive form - but I can explore how possible this is if absolutely necessary.
**Describe alternatives you've considered**
I'm up for anything.
**Additional context**
n/a
**Priority**
Relatively high - I'd rather not surprise anyone with this, and I'd very much like some guidance from the users who do this sort of thing. I'll eventually be forced to interpret silence as enthusiastic approval for whatever I'm thinking at the time, so input please!
| 1.0 | Feature Request - loan item picks - **Is your feature request related to a problem? Please describe.**
I am going to [rebuild and modernize specimen search and results](https://github.com/ArctosDB/arctos/issues/2745). The injected loan pick thing is antique and crusty and not very compatible with modern architecture.
**Describe what you're trying to accomplish**
* Not try to do crazy things to the results table, all in the name of sanity and sustainability
* Make the loan pick thing better
**Describe the solution you'd like**
I think this is best done via a 'manage' option that redirects to a loan part picker table. This would be a static/pre-defined table, but we can add WHATEVER to it, and it would be born with the correct part-centric structure. It would also be completely internal so there's lots more flexibility than there can be in anything which must also support public users.
Another possibility is a 'pick loan' button which pops open an overlay (perhaps similar to the current embedded thing). I'm not crazy about this idea - it's a potentially expensive check in a very expensive form - but I can explore how possible this is if absolutely necessary.
**Describe alternatives you've considered**
I'm up for anything.
**Additional context**
n/a
**Priority**
Relatively high - I'd rather not surprise anyone with this, and I'd very much like some guidance from the users who do this sort of thing. I'll eventually be forced to interpret silence as enthusiastic approval for whatever I'm thinking at the time, so input please!
| priority | feature request loan item picks is your feature request related to a problem please describe i am going to the injected loan pick thing is antique and crusty and not very compatible with modern architecture describe what you re trying to accomplish not try to do crazy things to the results table all in the name of sanity and sustainability make the loan pick thing better describe the solution you d like i think this is best done via a manage option that redirects to a loan part picker table this would be a static pre defined table but we can add whatever to it and it would be born with the correct part centric structure it would also be completely internal so there s lots more flexibility than there can be in anything which must also support public users another possibility is a pick loan button which pops open an overlay perhaps similar to the current embedded thing i m not crazy about this idea it s a potentially expensive check in a very expensive form but i can explore how possible this is if absolutely necessary describe alternatives you ve considered i m up for anything additional context n a priority relatively high i d rather not surprise anyone with this and i d very much like some guidance from the users who do this sort of thing i ll eventually be forced to interpret silence as enthusiastic approval for whatever i m thinking at the time so input please | 1 |
221,912 | 7,398,885,721 | IssuesEvent | 2018-03-19 08:32:16 | llmhyy/microbat | https://api.github.com/repos/llmhyy/microbat | opened | [Instrumentation] Possible Missing Step | high priority of priority | hi @lylytran
Would you please check Closure 60? The fixed trace may miss some steps. If you can test it, the 30042th step on fixed step should be followed by a step running into line 123 of NodeUtil.java. However, it is followed by a step running into line 615 of Node.java. | 2.0 | [Instrumentation] Possible Missing Step - hi @lylytran
Would you please check Closure 60? The fixed trace may miss some steps. If you can test it, the 30042th step on fixed step should be followed by a step running into line 123 of NodeUtil.java. However, it is followed by a step running into line 615 of Node.java. | priority | possible missing step hi lylytran would you please check closure the fixed trace may miss some steps if you can test it the step on fixed step should be followed by a step running into line of nodeutil java however it is followed by a step running into line of node java | 1 |
95,365 | 3,946,638,337 | IssuesEvent | 2016-04-28 06:01:16 | raml-org/raml-js-parser-2 | https://api.github.com/repos/raml-org/raml-js-parser-2 | closed | ResourceTypeRef must have better helpers | enhancement priority:high | Right now we have to jump to High-level in order to obtain resource type appliance parameters, type name could also be directly provided from ResourceTypeRef method. | 1.0 | ResourceTypeRef must have better helpers - Right now we have to jump to High-level in order to obtain resource type appliance parameters, type name could also be directly provided from ResourceTypeRef method. | priority | resourcetyperef must have better helpers right now we have to jump to high level in order to obtain resource type appliance parameters type name could also be directly provided from resourcetyperef method | 1 |
646,263 | 21,042,632,120 | IssuesEvent | 2022-03-31 13:35:57 | AY2122S2-CS2103T-T09-3/tp | https://api.github.com/repos/AY2122S2-CS2103T-T09-3/tp | closed | [DG] Update DG | type.Task priority.High | ## Details
Improve and fix the DG's relevant sections.
Things to edit for different people:
- [x] WJ - show status (Line 365) & edit comment (Line 875), do follow the format as the rest have done
- [x] SC - import export feature, add sequence diagrams (line 350), improve the feature further with more description and diagrams
- [x] SC - importing/exporting trackmon data (Line 1025 onwards), improve the format further with more description
- [x] KW - Sort, update the sort information when done with sort rework | 1.0 | [DG] Update DG - ## Details
Improve and fix the DG's relevant sections.
Things to edit for different people:
- [x] WJ - show status (Line 365) & edit comment (Line 875), do follow the format as the rest have done
- [x] SC - import export feature, add sequence diagrams (line 350), improve the feature further with more description and diagrams
- [x] SC - importing/exporting trackmon data (Line 1025 onwards), improve the format further with more description
- [x] KW - Sort, update the sort information when done with sort rework | priority | update dg details improve and fix the dg s relevant sections things to edit for different people wj show status line edit comment line do follow the format as the rest have done sc import export feature add sequence diagrams line improve the feature further with more description and diagrams sc importing exporting trackmon data line onwards improve the format further with more description kw sort update the sort information when done with sort rework | 1 |
22,036 | 2,644,467,853 | IssuesEvent | 2015-03-12 17:05:04 | TabakoffLab/PhenoGen | https://api.github.com/repos/TabakoffLab/PhenoGen | opened | WGCNA sections for region and gene interfere with eachother | bug High Priority | IDs for elements in both sections are the same once one has loaded the other cannot load. This is ok when the gene version is loaded then the region but not the other way around as it loads the gene data into the region section which is not displayed. | 1.0 | WGCNA sections for region and gene interfere with eachother - IDs for elements in both sections are the same once one has loaded the other cannot load. This is ok when the gene version is loaded then the region but not the other way around as it loads the gene data into the region section which is not displayed. | priority | wgcna sections for region and gene interfere with eachother ids for elements in both sections are the same once one has loaded the other cannot load this is ok when the gene version is loaded then the region but not the other way around as it loads the gene data into the region section which is not displayed | 1 |
101,287 | 4,112,348,619 | IssuesEvent | 2016-06-07 10:07:03 | pombase/canto | https://api.github.com/repos/pombase/canto | closed | better text for the genotype management drop down | discuss genotype_enhancements high priority next quick text change | moved from
https://github.com/pombase/canto/issues/1133#issuecomment-219400256
So, now do we still need better text for the drop down?
Annotations for this genotype -> view annotations?
Edit details -> edit genotype details?
Duplicate -> Copy and Edit?
Suggestions? | 1.0 | better text for the genotype management drop down - moved from
https://github.com/pombase/canto/issues/1133#issuecomment-219400256
So, now do we still need better text for the drop down?
Annotations for this genotype -> view annotations?
Edit details -> edit genotype details?
Duplicate -> Copy and Edit?
Suggestions? | priority | better text for the genotype management drop down moved from so now do we still need better text for the drop down annotations for this genotype view annotations edit details edit genotype details duplicate copy and edit suggestions | 1 |
563,716 | 16,704,466,113 | IssuesEvent | 2021-06-09 08:20:30 | bounswe/2021SpringGroup10 | https://api.github.com/repos/bounswe/2021SpringGroup10 | closed | Reviewing folder structure | Coding: Backend Priority: High Type: Enhancement | Hello everyone,
Please review your folder structure and add your code in the folder named "practice_app" so that we will not have a merging conflict. Thanks! | 1.0 | Reviewing folder structure - Hello everyone,
Please review your folder structure and add your code in the folder named "practice_app" so that we will not have a merging conflict. Thanks! | priority | reviewing folder structure hello everyone please review your folder structure and add your code in the folder named practice app so that we will not have a merging conflict thanks | 1
107,125 | 4,289,212,110 | IssuesEvent | 2016-07-17 23:32:25 | rdunlop/unicycling-registration | https://api.github.com/repos/rdunlop/unicycling-registration | closed | Indicate "ineligible" competitors on the announcer sheets | High Priority | Ineligible competitors are not indicated as such on the Announcer Sheet.
This caused some confusion when they were attempting to announce results. | 1.0 | Indicate "ineligible" competitors on the announcer sheets - Ineligible competitors are not indicated as such on the Announcer Sheet.
This caused some confusion when they were attempting to announce results. | priority | indicate ineligible competitors on the announcer sheets ineligible competitors are not indicated as such on the announcer sheet this caused some confusion when they were attempting to announce results | 1 |
794,560 | 28,040,354,043 | IssuesEvent | 2023-03-28 18:01:09 | minio/docs | https://api.github.com/repos/minio/docs | reopened | Update KES Errors to resolve permissions on February 2023+ releases | priority: high community | * Minio is currently not starting in our environment after upgradeing to the newest release in combination with KES
* `Error: not authorized: insufficient permissions (kes.Error)`
* We are unable to find out if there was a change in minio which needs more permissions to use KMS?
* API-Documentation of KES did not change since 2022, and we've upgraded from a Jan 23 release to the newest
* https://github.com/minio/kes/wiki/Server-API#api-overview
* These endpoints were the only ones needed before, and it seems like minio needs more with newer releases?
```
- /v1/key/create/*
- /v1/key/generate/*
- /v1/key/decrypt/*
```
## Expected Behavior
* Minio should start as usual
## Current Behavior
```
Mär 28 13:51:57 minio2 minio[13835]: Documentation: https://min.io/docs/minio/linux/index.html
Mär 28 13:51:57 minio2 minio[13835]: API: SYSTEM()
Mär 28 13:51:57 minio2 minio[13835]: Time: 11:51:57 UTC 03/28/2023
Mär 28 13:51:57 minio2 minio[13835]: DeploymentID: e425c60c-3de5-42d3-81c5-09a620eaf8c6
Mär 28 13:51:57 minio2 minio[13835]: Error: not authorized: insufficient permissions (kes.Error)
Mär 28 13:51:57 minio2 minio[13835]: 5: internal/logger/logger.go:258:logger.LogIf()
Mär 28 13:51:57 minio2 minio[13835]: 4: cmd/api-errors.go:2280:cmd.toAPIErrorCode()
Mär 28 13:51:57 minio2 minio[13835]: 3: cmd/api-errors.go:2305:cmd.toAPIError()
Mär 28 13:51:57 minio2 minio[13835]: 2: cmd/healthcheck-handler.go:134:cmd.LivenessCheckHandler()
Mär 28 13:51:57 minio2 minio[13835]: 1: net/http/server.go:2109:http.HandlerFunc.ServeHTTP()
```
## Possible Solution
* After removing the "policy" section in the kes config and enabling the admin identity, minio is working again.
## Steps to Reproduce (for bugs)
* Upgrade to RELEASE.2023-03-24T21-41-23Z
* restarting minio service results in error above
* upgrade KES to 2023-02-15T14-54-37Z _did not fix it_
## Context
* Upgraded MINIO from a release of january to:
* minio version RELEASE.2023-03-24T21-41-23Z (commit-id=74040b457b50417b58eae7cb17c63428a0e2dd44)
* kes 2023-02-15T14-54-37Z (commit=8ce403b264e3c6fafb94577dd1c59ae811963e43)
## Your Environment
**We have set the following Env vars for minio regarding KES:**
```
MINIO_KMS_KES_ENDPOINT
MINIO_KMS_KES_ENDPOINT
MINIO_KMS_KES_CERT_FILE
MINIO_KMS_KES_KEY_FILE
MINIO_KMS_KES_CAPATH
MINIO_KMS_KES_KEY_NAME
```
* **This was our kes-config before the upgrade**
```
address: 0.0.0.0:7373
admin:
identity: disabled
root: disabled
tls:
key: /minio/certs/private.key
cert: /minio/certs/public.crt
policy:
minio-kes:
allow:
- /v1/key/create/*
- /v1/key/generate/*
- /v1/key/decrypt/*
identities:
- ABCD
log:
error: on
audit: on
keystore:
vault:
endpoint: https://vault:8200
engine: "minio-kes"
version: "v2"
approle:
id: "YYYYYYY"
secret: "ZZZZZZZ"
retry: 15s
status:
ping: 10s
```
* **This is our kes-config AFTER the upgrade to fix the issue**
```
address: 0.0.0.0:7373
admin:
identity: ABCD
root: disabled
tls:
key: /minio/certs/private.key
cert: /minio/certs/public.crt
log:
error: on
audit: on
keystore:
vault:
endpoint: https://vault:8200
engine: "minio-kes"
version: "v2"
approle:
id: "YYYYYYY"
secret: "ZZZZZZZ"
retry: 15s
status:
ping: 10s
```
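A permission mismatch like the one reported here can be sanity-checked offline by testing whether a policy's `allow` patterns cover the API paths MinIO is known to call. A sketch under assumptions: glob-style matching via `fnmatch` stands in for the real KES path matcher, and the key name `my-key` is hypothetical:

```python
# Hypothetical check: do a KES-style policy's "allow" patterns cover the
# key-API paths MinIO calls? fnmatch glob semantics are an assumption here;
# the real KES matcher may differ. The key name "my-key" is made up.
from fnmatch import fnmatch

REQUIRED_PATHS = [
    "/v1/key/create/my-key",
    "/v1/key/generate/my-key",
    "/v1/key/decrypt/my-key",
]

def uncovered_paths(allow_patterns, required=REQUIRED_PATHS):
    """Return the required paths not matched by any allow pattern."""
    return [p for p in required
            if not any(fnmatch(p, pat) for pat in allow_patterns)]

old_policy = ["/v1/key/create/*", "/v1/key/generate/*", "/v1/key/decrypt/*"]
print(uncovered_paths(old_policy))  # [] -- these three calls are covered
```

If a newer MinIO release calls additional endpoints, adding them to the required list would show exactly which allow rules the old policy is missing.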
| 1.0 | Update KES Errors to resolve permissions on February 2023+ releases - * Minio is currently not starting in our environment after upgrading to the newest release in combination with KES
* `Error: not authorized: insufficient permissions (kes.Error)`
* We are unable to find out if there was a change in minio which needs more permissions to use KMS?
* API-Documentation of KES did not change since 2022, and we've upgraded from a Jan 23 release to the newest
* https://github.com/minio/kes/wiki/Server-API#api-overview
* These endpoints were the only ones needed before, and it seems like minio needs more with newer releases?
```
- /v1/key/create/*
- /v1/key/generate/*
- /v1/key/decrypt/*
```
## Expected Behavior
* Minio should start as usual
## Current Behavior
```
Mär 28 13:51:57 minio2 minio[13835]: Documentation: https://min.io/docs/minio/linux/index.html
Mär 28 13:51:57 minio2 minio[13835]: API: SYSTEM()
Mär 28 13:51:57 minio2 minio[13835]: Time: 11:51:57 UTC 03/28/2023
Mär 28 13:51:57 minio2 minio[13835]: DeploymentID: e425c60c-3de5-42d3-81c5-09a620eaf8c6
Mär 28 13:51:57 minio2 minio[13835]: Error: not authorized: insufficient permissions (kes.Error)
Mär 28 13:51:57 minio2 minio[13835]: 5: internal/logger/logger.go:258:logger.LogIf()
Mär 28 13:51:57 minio2 minio[13835]: 4: cmd/api-errors.go:2280:cmd.toAPIErrorCode()
Mär 28 13:51:57 minio2 minio[13835]: 3: cmd/api-errors.go:2305:cmd.toAPIError()
Mär 28 13:51:57 minio2 minio[13835]: 2: cmd/healthcheck-handler.go:134:cmd.LivenessCheckHandler()
Mär 28 13:51:57 minio2 minio[13835]: 1: net/http/server.go:2109:http.HandlerFunc.ServeHTTP()
```
## Possible Solution
* After removing the "policy" section in the kes config and enabling the admin identity, minio is working again.
## Steps to Reproduce (for bugs)
* Upgrade to RELEASE.2023-03-24T21-41-23Z
* restarting minio service results in error above
* upgrade KES to 2023-02-15T14-54-37Z _did not fix it_
## Context
* Upgraded MINIO from a release of january to:
* minio version RELEASE.2023-03-24T21-41-23Z (commit-id=74040b457b50417b58eae7cb17c63428a0e2dd44)
* kes 2023-02-15T14-54-37Z (commit=8ce403b264e3c6fafb94577dd1c59ae811963e43)
## Your Environment
**We have set the following Env vars for minio regarding KES:**
```
MINIO_KMS_KES_ENDPOINT
MINIO_KMS_KES_ENDPOINT
MINIO_KMS_KES_CERT_FILE
MINIO_KMS_KES_KEY_FILE
MINIO_KMS_KES_CAPATH
MINIO_KMS_KES_KEY_NAME
```
* **This was our kes-config before the upgrade**
```
address: 0.0.0.0:7373
admin:
identity: disabled
root: disabled
tls:
key: /minio/certs/private.key
cert: /minio/certs/public.crt
policy:
minio-kes:
allow:
- /v1/key/create/*
- /v1/key/generate/*
- /v1/key/decrypt/*
identities:
- ABCD
log:
error: on
audit: on
keystore:
vault:
endpoint: https://vault:8200
engine: "minio-kes"
version: "v2"
approle:
id: "YYYYYYY"
secret: "ZZZZZZZ"
retry: 15s
status:
ping: 10s
```
* **This is our kes-config AFTER the upgrade to fix the issue**
```
address: 0.0.0.0:7373
admin:
identity: ABCD
root: disabled
tls:
key: /minio/certs/private.key
cert: /minio/certs/public.crt
log:
error: on
audit: on
keystore:
vault:
endpoint: https://vault:8200
engine: "minio-kes"
version: "v2"
approle:
id: "YYYYYYY"
secret: "ZZZZZZZ"
retry: 15s
status:
ping: 10s
```
| priority | update kes errors to resolve permissions on february releases minio is currently not starting in our environment after upgradeing to the newest release in combination with kes error not authorized insufficient permissions kes error we are unable to find out if there was a change in minio which needs more permissions to use kms api documentation of kes did not change since and we ve upgraded from a jan release to the newest these endpoints we re the only one needed before and it seems like minio needs more with newer releases key create key generate key decrypt expected behavior minio should start as usual current behavior mรคr minio documentation mรคr minio api system mรคr minio time utc mรคr minio deploymentid mรคr minio error not authorized insufficient permissions kes error mรคr minio internal logger logger go logger logif mรคr minio cmd api errors go cmd toapierrorcode mรคr minio cmd api errors go cmd toapierror mรคr minio cmd healthcheck handler go cmd livenesscheckhandler mรคr minio net http server go http handlerfunc servehttp possible solution after removing the policy section in kes config and enable admin identity minio is working again steps to reproduce for bugs upgrade to release restarting minio service results in error above upgrade kes to did not fix it context upgraded minio from a release of january to minio version release commit id kes commit your environment we have set the following env vars for minio regaring kes minio kms kes endpoint minio kms kes endpoint minio kms kes cert file minio kms kes key file minio kms kes capath minio kms kes key name this was our kes config before the upgrade address admin identity disabled root disabled tls key minio certs private key cert minio certs public crt policy minio kes allow key create key generate key decrypt identities abcd log error on audit on keystore vault endpoint engine minio kes version approle id yyyyyyy secret zzzzzzz retry status ping this is our kes config after the upgrade 
to fix the issue address admin identity abcd root disabled tls key minio certs private key cert minio certs public crt log error on audit on keystore vault endpoint engine minio kes version approle id yyyyyyy secret zzzzzzz retry status ping | 1 |
182,501 | 6,670,734,579 | IssuesEvent | 2017-10-04 01:49:02 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | closed | ACDC with too many parents | High Priority | Currently the following workflows are skipped in vocms0308 and vocms0310 because too many parent files (60K) need to be retrieved, which takes up all the processing in WorkQueueMananger
We need to check whether parentage is needed or make it more efficient.
```
"mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053538_530", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053544_9947", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053551_9974", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053600_3305", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053608_5854", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053617_8169", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053625_8322", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053632_9709", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053641_9747", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053649_646"
``` | 1.0 | ACDC with too many parents - Currently the following workflows are skipped in vocms0308 and vocms0310 because too many parent files (60K) need to be retrieved, which takes up all the processing in WorkQueueMananger
We need to check whether parentage is needed or make it more efficient.
```
"mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053538_530", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053544_9947", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053551_9974", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053600_3305", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053608_5854", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053617_8169", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053625_8322", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053632_9709", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053641_9747", "mcremone_ACDC0_task_EGM-RunIISummer17GS-00002__v1_T_170924_053649_646"
``` | priority | acdc with too many parents currently following workflows are skipped in and due to too many parents files need to be retrieved which takes up all the processing in workqueuemananger we need to check whether parentage is needed or make more effient mcremone task egm t mcremone task egm t mcremone task egm t mcremone task egm t mcremone task egm t mcremone task egm t mcremone task egm t mcremone task egm t mcremone task egm t mcremone task egm t | 1 |
267,099 | 8,379,147,762 | IssuesEvent | 2018-10-06 21:41:36 | jroal/a2dpvolume | https://api.github.com/repos/jroal/a2dpvolume | closed | App crashes when attempting to set volume | Priority-High bug help wanted | java.lang.SecurityException:
at android.os.Parcel.readException(Parcel.java:1620)
at android.os.Parcel.readException(Parcel.java:1573)
at android.media.IAudioService$Stub$Proxy.setStreamVolume(IAudioService.java:961)
at android.media.AudioManager.setStreamVolume(AudioManager.java:1340)
at a2dp.Vol.service.setVolume(service.java:1027)
at a2dp.Vol.service$7.onFinish(service.java:694)
at android.os.CountDownTimer$1.handleMessage(CountDownTimer.java:127)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:158)
at android.app.ActivityThread.main(ActivityThread.java:7225)
at java.lang.reflect.Method.invoke(Native Method:0)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1230)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1120) | 1.0 | App crashes when attempting to set volume - java.lang.SecurityException:
at android.os.Parcel.readException(Parcel.java:1620)
at android.os.Parcel.readException(Parcel.java:1573)
at android.media.IAudioService$Stub$Proxy.setStreamVolume(IAudioService.java:961)
at android.media.AudioManager.setStreamVolume(AudioManager.java:1340)
at a2dp.Vol.service.setVolume(service.java:1027)
at a2dp.Vol.service$7.onFinish(service.java:694)
at android.os.CountDownTimer$1.handleMessage(CountDownTimer.java:127)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:158)
at android.app.ActivityThread.main(ActivityThread.java:7225)
at java.lang.reflect.Method.invoke(Native Method:0)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1230)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1120) | priority | app crashes when attempting to set volume java lang securityexception at android os parcel readexception parcel java at android os parcel readexception parcel java at android media iaudioservice stub proxy setstreamvolume iaudioservice java at android media audiomanager setstreamvolume audiomanager java at vol service setvolume service java at vol service onfinish service java at android os countdowntimer handlemessage countdowntimer java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at com android internal os zygoteinit methodandargscaller run zygoteinit java at com android internal os zygoteinit main zygoteinit java | 1 |
790,075 | 27,814,952,189 | IssuesEvent | 2023-03-18 15:20:10 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | Avoid package cache set every run for GitHub tags/releases | type:bug priority-2-high status:in-progress performance | ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
source
### If you're self-hosting Renovate, select which platform you are using.
github.com
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
Even when using a completely "hot" cache and nothing changes, Renovate performs a lot of package cache set commands. I added logging to see which they were:
```
WARN: Cache set (repository=renovatebot/renovate)
"namespace": "github-tags-datasource-v2",
"key": "https://api.github.com/:peter-evans:create-pull-request",
"ttlMinutes": 43096.24955
WARN: Cache set (repository=renovatebot/renovate)
"namespace": "github-releases-datasource-v2",
"key": "https://api.github.com/:actions:upload-artifact",
"ttlMinutes": 43096.23288333333
```
Using this repo as an example, it results in 24 unnecessary set commands per run when doing dryRun=lookup:
```
"set": {"count": 24, "avgMs": 1136, "medianMs": 577, "maxMs": 7438}
```
If I shortcut the set command (immediately return) then it reduces the lookup time:
Two runs without set:
```
DEBUG: Repository timing splits (milliseconds) (repository=renovatebot/renovate)
"splits": {"init": 12387, "extract": 3572, "lookup": 14219},
"total": 30336
```
```
DEBUG: Repository timing splits (milliseconds) (repository=renovatebot/renovate)
"splits": {"init": 12174, "extract": 3395, "lookup": 14245},
"total": 29949
```
Two runs with set:
```
DEBUG: Repository timing splits (milliseconds) (repository=renovatebot/renovate)
"splits": {"init": 10507, "extract": 3426, "lookup": 21464},
"total": 35540
```
```
DEBUG: Repository timing splits (milliseconds) (repository=renovatebot/renovate)
"splits": {"init": 11238, "extract": 3393, "lookup": 21498},
"total": 36281
```
i.e. it seems to add 5-6 seconds per run.
Ideas:
- Don't set if nothing has changed? or
- Don't set unless TTL remaining is less than half
### Relevant debug logs
_No response_
### Have you created a minimal reproduction repository?
I have read the minimal reproductions documentation and linked to such a repository in the bug description | 1.0 | Avoid package cache set every run for GitHub tags/releases - ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
source
### If you're self-hosting Renovate, select which platform you are using.
github.com
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
Even when using a completely "hot" cache and nothing changes, Renovate performs a lot of package cache set commands. I added logging to see which they were:
```
WARN: Cache set (repository=renovatebot/renovate)
"namespace": "github-tags-datasource-v2",
"key": "https://api.github.com/:peter-evans:create-pull-request",
"ttlMinutes": 43096.24955
WARN: Cache set (repository=renovatebot/renovate)
"namespace": "github-releases-datasource-v2",
"key": "https://api.github.com/:actions:upload-artifact",
"ttlMinutes": 43096.23288333333
```
Using this repo as an example, it results in 24 unnecessary set commands per run when doing dryRun=lookup:
```
"set": {"count": 24, "avgMs": 1136, "medianMs": 577, "maxMs": 7438}
```
If I shortcut the set command (immediately return) then it reduces the lookup time:
Two runs without set:
```
DEBUG: Repository timing splits (milliseconds) (repository=renovatebot/renovate)
"splits": {"init": 12387, "extract": 3572, "lookup": 14219},
"total": 30336
```
```
DEBUG: Repository timing splits (milliseconds) (repository=renovatebot/renovate)
"splits": {"init": 12174, "extract": 3395, "lookup": 14245},
"total": 29949
```
Two runs with set:
```
DEBUG: Repository timing splits (milliseconds) (repository=renovatebot/renovate)
"splits": {"init": 10507, "extract": 3426, "lookup": 21464},
"total": 35540
```
```
DEBUG: Repository timing splits (milliseconds) (repository=renovatebot/renovate)
"splits": {"init": 11238, "extract": 3393, "lookup": 21498},
"total": 36281
```
i.e. it seems to add 5-6 seconds per run.
Ideas:
- Don't set if nothing has changed? or
- Don't set unless TTL remaining is less than half
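The second idea above can be sketched as a guard in front of the cache write: skip the `set` when the value is unchanged and at least half the TTL remains. A hypothetical in-memory model of this; Renovate's real package-cache API differs:

```python
# Sketch of the "don't set unless TTL remaining is less than half" idea.
# A made-up in-memory cache, not Renovate's actual package-cache interface.
import time

class SoftTtlCache:
    def __init__(self):
        self._store = {}  # key -> (value, expires_at, ttl_seconds)
        self.set_count = 0  # how many real writes happened

    def set(self, key, value, ttl_seconds, now=None):
        """Write-through only when needed; return True if a write occurred."""
        now = time.time() if now is None else now
        entry = self._store.get(key)
        if entry is not None:
            old_value, expires_at, _ = entry
            remaining = expires_at - now
            # Skip the write when nothing changed and >= half the TTL remains.
            if old_value == value and remaining >= ttl_seconds / 2:
                return False
        self._store[key] = (value, now + ttl_seconds, ttl_seconds)
        self.set_count += 1
        return True
```

With a hot cache and unchanged lookup results, most runs would then perform zero writes, which is exactly the 5-6 seconds per run the timings above attribute to redundant `set` calls.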
### Relevant debug logs
_No response_
### Have you created a minimal reproduction repository?
I have read the minimal reproductions documentation and linked to such a repository in the bug description | priority | avoid package cache set every run for github tags releases how are you running renovate self hosted if you re self hosting renovate tell us what version of renovate you run source if you re self hosting renovate select which platform you are using github com if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug even when using a completely hot cache and nothing changes renovate performs a lot of package cache set commands i added logging to see which they were warn cache set repository renovatebot renovate namespace github tags datasource key ttlminutes warn cache set repository renovatebot renovate namespace github releases datasource key ttlminutes using this repo as an example it results in unnecessary set commands per run when doing dryrun lookup set count avgms medianms maxms if i shortcut the set command immediately return then it reduces the lookup time two runs without set debug repository timing splits milliseconds repository renovatebot renovate splits init extract lookup total debug repository timing splits milliseconds repository renovatebot renovate splits init extract lookup total two runs with set debug repository timing splits milliseconds repository renovatebot renovate splits init extract lookup total debug repository timing splits milliseconds repository renovatebot renovate splits init extract lookup total i e it seems to add seconds per run ideas don t set if nothing has changed or don t set unless ttl remaining is less than half relevant debug logs no response have you created a minimal reproduction repository i have read the minimal reproductions documentation and linked to such a repository in the bug description | 1 |
524,846 | 15,224,413,010 | IssuesEvent | 2021-02-18 05:10:52 | ballerina-platform/ballerina-release | https://api.github.com/repos/ballerina-platform/ballerina-release | opened | Conduct a Mutation testing against the Test Scenarios in distribution and installers | Priority/High Type/Task | **Description:**
Currently we have tests in the ballerina-distribution repository, and we have Ballerina installer tests as well.
We assume these tests will ensure the functionality.
Some of the tests may not catch actual bugs for various reasons; for example, tests may be using intermediate packs, or the exit code may be wrong.
Therefore we need to conduct a mutation testing effort by purposely introducing a few bugs and checking how many of them are identified by our current test suite. | 1.0 | Conduct a Mutation testing against the Test Scenarios in distribution and installers - **Description:**
Currently we have tests in the ballerina-distribution repository, and we have Ballerina installer tests as well.
We assume these tests will ensure the functionality.
Some of the tests may not catch actual bugs for various reasons; for example, tests may be using intermediate packs, or the exit code may be wrong.
Therefore we need to conduct a mutation testing effort by purposely introducing a few bugs and checking how many of them are identified by our current test suite. | priority | conduct a mutation testing against the test scenarios in distribution and installers description currently we have tests in the ballerina distribution repository and we have ballerina installer tests as well we assume these tests will ensure the functionality some of the tests may not catch actual bugs for various reasons for example tests may be using intermediate packs or the exit code may be wrong therefore we need to conduct a mutation testing effort by purposely introducing a few bugs and checking how many of them are identified by our current test suite | 1
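The mutation-testing effort this record describes can be illustrated in miniature: introduce small deliberate bugs (mutants) and count how many the test suite flags. Everything below, the function under test, the mutants, and the toy "suite", is invented for illustration and is not Ballerina's actual tooling:

```python
# Minimal illustration of mutation testing: run the same test suite against
# deliberately broken variants (mutants) and count how many are caught.
# The function under test, the mutants, and the suite are all made up.
import operator

def run_suite(add):
    """A toy 'test suite' for an add() function; True means all tests pass."""
    return add(2, 3) == 5 and add(-1, 1) == 0

mutants = {
    "original": operator.add,
    "sub_mutant": operator.sub,          # a + b -> a - b
    "off_by_one": lambda a, b: a + b + 1,  # a + b -> a + b + 1
}

# A mutant is "caught" when the suite fails on it.
caught = [name for name, fn in mutants.items()
          if name != "original" and not run_suite(fn)]
print(f"caught {len(caught)}/2 mutants: {caught}")
```

A mutant that survives (the suite still passes) points at a gap in the tests, which is the kind of blind spot, such as a wrong exit code going unnoticed, that the issue wants to expose.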
797,508 | 28,147,467,058 | IssuesEvent | 2023-04-02 16:51:33 | AY2223S2-CS2113-T14-3/tp | https://api.github.com/repos/AY2223S2-CS2113-T14-3/tp | opened | Refine UG and DG documents | type.Task priority.High | This is to set the direction of MyLedger towards its target audience | 1.0 | Refine UG and DG documents - This is to set the direction of MyLedger towards its target audience | priority | refine ug and dg documents this is to set the direction of myledger towards its target audience | 1 |
365,694 | 10,790,709,596 | IssuesEvent | 2019-11-05 15:26:17 | flextype/flextype | https://api.github.com/repos/flextype/flextype | opened | Flextype Core: fix id's names for all generated fields. | priority: high type: bug | We should have getElementID method in Forms API to create correct fields ID's | 1.0 | Flextype Core: fix id's names for all generated fields. - We should have getElementID method in Forms API to create correct fields ID's | priority | flextype core fix id s names for all generated fields we should have getelementid method in forms api to create correct fields id s | 1 |
237,341 | 7,758,810,038 | IssuesEvent | 2018-05-31 20:51:17 | davemfish/trails-viz | https://api.github.com/repos/davemfish/trails-viz | closed | Display social media PUDs on histogram and time series | enhancement priority:high | Add the capability to view total PUDs from each social media data source as additional lines or bars on the plots. To keep the plots from getting too busy, this might require an option to switch between modeled PUDs and social media PUDs. I think users will generally prefer to look at one or the other -- and for some projects we won't have modeled PUDs, so users will be limited to viewing just PUDs from social media. I also wonder if this feature might be too much to view for multiple trails? | 1.0 | Display social media PUDs on histogram and time series - Add the capability to view total PUDs from each social media data source as additional lines or bars on the plots. To keep the plots from getting too busy, this might require an option to switch between modeled PUDs and social media PUDs. I think users will generally prefer to look at one or the other -- and for some projects we won't have modeled PUDs, so users will be limited to viewing just PUDs from social media. I also wonder if this feature might be too much to view for multiple trails? | priority | display social media puds on histogram and time series add the capability to view total puds from each social media data source as additional lines or bars on the plots to keep the plots from getting too busy this might require an option to switch between modeled puds and social media puds i think users will generally prefer to look at one or the other and for some projects we won t have modeled puds so users will be limited to viewing just puds from social media i also wonder if this feature might be too much to view for multiple trails | 1 |
434,752 | 12,522,692,774 | IssuesEvent | 2020-06-03 19:39:18 | ankidroid/Anki-Android | https://api.github.com/repos/ankidroid/Anki-Android | closed | E-ink display mode: Don't show circular progress bar | Accepted Enhancement Priority-High Stale | Originally reported on Google Code with ID 2125
```
In the 2.2 Alpha versions I see spinners used in some places:
1) When syncing.
2) When the card takes some time to display, I see a spinner for a fraction of a second
in the study screen.
When E-ink mode is selected, these spinners should not be displayed. E-ink devices
use a comparatively huge amount of system resources, such as battery, to display animations.
Thank you.
```
Reported by `dotancohen` on 2014-05-27 16:03:49
| 1.0 | E-ink display mode: Don't show circular progress bar - Originally reported on Google Code with ID 2125
```
In the 2.2 Alpha versions I see spinners used in some places:
1) When syncing.
2) When the card takes some time to display, I see a spinner for a fraction of a second
in the study screen.
When E-ink mode is selected, these spinners should not be displayed. E-ink devices
use a comparatively huge amount of system resources, such as battery, to display animations.
Thank you.
```
Reported by `dotancohen` on 2014-05-27 16:03:49
| priority | e ink display mode don t show circular progress bar originally reported on google code with id in the alpha versions i see spinners used in some places when syncing when the card takes some time to display i see a spinner for a fraction of a second in the study screen when e ink mode is selected these spinners should not be displayed e ink devices use a comparatively huge amount of system resources such as battery to display animations thank you reported by dotancohen on | 1 |
592,509 | 17,909,793,985 | IssuesEvent | 2021-09-09 02:33:00 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | opened | In-Built compiler plugins should have an order when executing | Type/Improvement Priority/High | **Description:**
In-Built compiler plugins should have an order when executing.
| 1.0 | In-Built compiler plugins should have an order when executing - **Description:**
In-Built compiler plugins should have an order when executing.
| priority | in built compiler plugins should have an order when executing description in built compiler plugins should have an order when executing | 1 |
287,602 | 8,817,228,500 | IssuesEvent | 2018-12-30 21:03:20 | uNetworking/uWebSockets | https://api.github.com/repos/uNetworking/uWebSockets | reopened | Ensure modules work individually | high priority | Zlib should be optional and never required to build the project
OpenSSL should also be optional and not required to build the project
Removing or adding any of them should work, with limited features as follows | 1.0 | Ensure modules work individually - Zlib should be optional and never required to build the project
OpenSSL should also be optional and not required to build the project
Removing or adding any of them should work, with limited features as follows | priority | ensure modules work individually zlib should be optional and never required to build the project openssl should also be optional and not required to build the project removing or adding any of them should work with limited features as follows | 1 |
408,255 | 11,943,814,494 | IssuesEvent | 2020-04-03 00:25:42 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | Temperature sync: full sync | Docs: not needed Effort: small Feature Priority: high | ## Is your feature request related to a problem? Please describe.
Related issue #2600
To do all use-cases of the temperature syncing in a single issue is a bit much. This issue will be the code for the actions to scan for sensors
## Describe the solution you'd like
- A `sync` action
- An `updateSyncProgress` action
- A `startResetLogFrequency` action
- A `completeResetLogFrequency` action
- A `errorResettingLogFreqeuncy` action
- A `resetLogFrequency` action - composition of above
- A `startResetAdvertisementFrequency` action
- A `completeResetAdvertisementFrequency` action
- A `errorResetAdvertisementFrequency` action
- A `resetAdvertisementFrequency` action - composition of above
- A `completeSync` action
## Implementation
- As above
## Describe alternatives you've considered
N/A
## Additional context
N/A
| 1.0 | Temperature sync: full sync - ## Is your feature request related to a problem? Please describe.
Related issue #2600
To do all use-cases of the temperature syncing in a single issue is a bit much. This issue will be the code for the actions to scan for sensors
## Describe the solution you'd like
- A `sync` action
- An `updateSyncProgress` action
- A `startResetLogFrequency` action
- A `completeResetLogFrequency` action
- A `errorResettingLogFreqeuncy` action
- A `resetLogFrequency` action - composition of above
- A `startResetAdvertisementFrequency` action
- A `completeResetAdvertisementFrequency` action
- A `errorResetAdvertisementFrequency` action
- A `resetAdvertisementFrequency` action - composition of above
- A `completeSync` action
## Implementation
- As above
## Describe alternatives you've considered
N/A
## Additional context
N/A
| priority | temperature sync full sync is your feature request related to a problem please describe related issue to do all use cases of the temperature syncing in a single issue is a bit much this issue will be the code for the actions to scan for sensors describe the solution you d like a sync action an updatesyncprogress action a startresetlogfrequency action a completeresetlogfrequency action a errorresettinglogfreqeuncy action a resetlogfrequency action composition of above a startresetadvertisementfrequency action a completeresetadvertisementfrequency action a errorresetadvertisementfrequency action a resetadvertisementfrequency action composition of above a completesync action implementation as above describe alternatives you ve considered n a additional context n a | 1 |
30,572 | 2,724,196,154 | IssuesEvent | 2015-04-14 16:30:19 | EFForg/https-everywhere | https://api.github.com/repos/EFForg/https-everywhere | closed | Update https://www.eff.org/https-everywhere/rulesets | High Priority | Right now https://www.eff.org/https-everywhere/rulesets doesn't reference the new test coverage requirements or style guide. We need to update it to take those into account.
As part of that process, we should submit the current version of that page to the repository and edit it here, so it is easier for contributors to make changes. The maintainer (currently me) will still have to manually copy those changes into the live site. | 1.0 | Update https://www.eff.org/https-everywhere/rulesets - Right now https://www.eff.org/https-everywhere/rulesets doesn't reference the new test coverage requirements or style guide. We need to update it to take those into account.
As part of that process, we should submit the current version of that page to the repository and edit it here, so it is easier for contributors to make changes. The maintainer (currently me) will still have to manually copy those changes into the live site. | priority | update right now doesn t reference the new test coverage requirements or style guide we need to update it to take those into account as part of that process we should submit the current version of that page to the repository and edit it here so it is easier for contributors to make changes the maintainer currently me will still have to manually copy those changes into the live site | 1 |
541,293 | 15,824,368,442 | IssuesEvent | 2021-04-06 03:06:52 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | [LS] Only type cast code action is suggested for class field assignments | Priority/High SwanLakeDump Team/LanguageServer Type/Bug Version/SLAlpha3 | **Description:**
Consider:
```ballerina
public class ProxyClient {
private tcp:Client proxyClient;
private http:Client cl;
public function init(string proxyHost, int proxyPort) {
self.proxyClient = new(proxyHost, proxyPort);
}
}
```
At `self.proxyClient = new(proxyHost, proxyPort);`, only type cast code action is suggested. Need to suggest `Add check` code action as well.
**Steps to reproduce:**
See description
**Affected Versions:**
SLA3
**Suggested Labels (optional):**
Version/SLAlpha3
| 1.0 | [LS] Only type cast code action is suggested for class field assignments - **Description:**
Consider:
```ballerina
public class ProxyClient {
private tcp:Client proxyClient;
private http:Client cl;
public function init(string proxyHost, int proxyPort) {
self.proxyClient = new(proxyHost, proxyPort);
}
}
```
At `self.proxyClient = new(proxyHost, proxyPort);`, only type cast code action is suggested. Need to suggest `Add check` code action as well.
**Steps to reproduce:**
See description
**Affected Versions:**
SLA3
**Suggested Labels (optional):**
Version/SLAlpha3
| priority | only type cast code action is suggestion for class field assignments description consider ballerina public class proxyclient private tcp client proxyclient private http client cl public function init string proxyhost int proxyport self proxyclient new proxyhost proxyport at self proxyclient new proxyhost proxyport only type cast code action is suggested need to suggest add check code action as well steps to reproduce see decription affected versions suggested labels optional version | 1 |
240,458 | 7,802,066,756 | IssuesEvent | 2018-06-10 07:36:05 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | NPE should be handled when "DataPublisher" is set to false in gateway manager's apim-manager.xml | 2.1.0 Priority/High Resolution/Fixed Severity/Major Type/Bug | **Description:**
A NPE is thrown when "DataPublisher" is set to false in gateway manager's apim-manager.xml ( pattern 6 deployment).
Ideal configuration should be set to "true" as discussed in mail thread [1\] for pattern 6 cluster deployment. However, the NPE should be handled in such situation. Find the stack-trace below.
[1\] "Error while publishing throttling events to global policy server"
```
TID: [-1234] [] [2018-01-26 14:07:41,822] ERROR {org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher} - Error while publishing throttling events to global policy server {org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher}
java.lang.NullPointerException
at org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher.publishNonThrottledEvent(ThrottleDataPublisher.java:119)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.doRoleBasedAccessThrottlingWithCEP(ThrottleHandler.java:349)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.doThrottle(ThrottleHandler.java:512)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.handleRequest(ThrottleHandler.java:460)
at org.apache.synapse.rest.API.process(API.java:325)
at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:90)
at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:69)
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:304)
at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:78)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:330)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:159)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
**Suggested Labels:**
APIM-2.1.0 wum (1516470910911)
Type/bug
Priority/high
Severity/Major
**OS, DB, other environment details and versions:**
1. OS: Ubuntu 14.04
2. DB: MySQL 5.5.58
3. JDK: Oracle jdk-8u121
4. Setup: puppet pattern 6
5. Note: enabled log4jdbc
**Related Issues:**
https://github.com/wso2/puppet-apim/issues/63 | 1.0 | NPE should be handled when "DataPublisher" is set to false in gateway manager's apim-manager.xml - **Description:**
A NPE is thrown when "DataPublisher" is set to false in gateway manager's apim-manager.xml ( pattern 6 deployment).
Ideal configuration should be set to "true" as discussed in mail thread [1\] for pattern 6 cluster deployment. However, the NPE should be handled in such situation. Find the stack-trace below.
[1\] "Error while publishing throttling events to global policy server"
```
TID: [-1234] [] [2018-01-26 14:07:41,822] ERROR {org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher} - Error while publishing throttling events to global policy server {org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher}
java.lang.NullPointerException
at org.wso2.carbon.apimgt.gateway.throttling.publisher.ThrottleDataPublisher.publishNonThrottledEvent(ThrottleDataPublisher.java:119)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.doRoleBasedAccessThrottlingWithCEP(ThrottleHandler.java:349)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.doThrottle(ThrottleHandler.java:512)
at org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler.handleRequest(ThrottleHandler.java:460)
at org.apache.synapse.rest.API.process(API.java:325)
at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:90)
at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:69)
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:304)
at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:78)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:330)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:159)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
**Suggested Labels:**
APIM-2.1.0 wum (1516470910911)
Type/bug
Priority/high
Severity/Major
**OS, DB, other environment details and versions:**
1. OS: Ubuntu 14.04
2. DB: MySQL 5.5.58
3. JDK: Oracle jdk-8u121
4. Setup: puppet pattern 6
5. Note: enabled log4jdbc
**Related Issues:**
https://github.com/wso2/puppet-apim/issues/63 | priority | npe should be handled when datapublisher is set to false in gateway manager s apim manager xml description a npe is thrown when datapublisher is set to false in gateway manager s apim manager xml pattern deployment ideal configuration should be set to true as discussed in mail thread for pattern cluster deployment however the npe should be handled in such situation find the stack trace below error while publishing throttling events to global policy server tid error org carbon apimgt gateway throttling publisher throttledatapublisher error while publishing throttling events to global policy server org carbon apimgt gateway throttling publisher throttledatapublisher java lang nullpointerexception at org carbon apimgt gateway throttling publisher throttledatapublisher publishnonthrottledevent throttledatapublisher java at org carbon apimgt gateway handlers throttling throttlehandler dorolebasedaccessthrottlingwithcep throttlehandler java at org carbon apimgt gateway handlers throttling throttlehandler dothrottle throttlehandler java at org carbon apimgt gateway handlers throttling throttlehandler handlerequest throttlehandler java at org apache synapse rest api process api java at org apache synapse rest restrequesthandler dispatchtoapi restrequesthandler java at org apache synapse rest restrequesthandler process restrequesthandler java at org apache synapse core injectmessage java at org apache synapse core synapsemessagereceiver receive synapsemessagereceiver java at org apache engine axisengine receive axisengine java at org apache synapse transport passthru serverworker processnonentityenclosingresthandler serverworker java at org apache synapse transport passthru serverworker run serverworker java at org apache transport base threads nativeworkerpool run nativeworkerpool java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run 
threadpoolexecutor java at java lang thread run thread java suggested labels apim wum type bug priority high sevirity major os db other environment details and versions os ubuntu db mysql jdk oracke jdk setup puppet pattern note enabled related issues | 1 |
758,850 | 26,571,204,146 | IssuesEvent | 2023-01-21 06:41:57 | ShafSpecs/remix-pwa | https://api.github.com/repos/ShafSpecs/remix-pwa | closed | ServiceWorker never registered because entry.client.jsx#window.addEventListener("load", ...) is never triggered | high priority | Reopening issue https://github.com/ShafSpecs/remix-pwa/issues/42.
--
Hello,
I've just installed remix-pwa@latest on a pristine remix@1.7.2 app (no template, using remix server). The service worker is never registered, I managed to pinpoint the problem to the lack of triggering of the window.addEventListener("load", ...) but can't manage to solve the issue.
This happens both in dev and prod deployments.
Changing to window.addEventListener("click", ...) does work.


 | 1.0 | ServiceWorker never registered because entry.client.jsx#window.addEventListener("load", ...) is never triggered - Reopening issue https://github.com/ShafSpecs/remix-pwa/issues/42.
--
Hello,
I've just installed remix-pwa@latest on a pristine remix@1.7.2 app (no template, using remix server). The service worker is never registered, I managed to pinpoint the problem to the lack of triggering of the window.addEventListener("load", ...) but can't manage to solve the issue.
This happens both in dev and prod deployments.
Changing to window.addEventListener("click", ...) does work.


 | priority | serviceworker never registered because entry client jsx window addeventlistener load is never triggered reopening issue hello i ve just installed remix pwa latest on a pristine remix app no template using remix server the service worker is never registered i managed to pinpoint the problem to the lack of triggering of the window addeventlistener load but can t manage to solve the issue this happens both in dev and prod deployments changing to window addeventlistener click does work | 1 |
462,101 | 13,240,921,644 | IssuesEvent | 2020-08-19 07:19:19 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | Update the Web UI and ETL process in Namibia production to read the uploaded targets | Priority: High | - [ ] We need to update the Web UI and ETL process in Namibia production to read the uploaded targets per jurisdiction instead of structure counts. The uploaded target structures should show under the "Structured Targets" on the IRS dashboard. | 1.0 | Update the Web UI and ETL process in Namibia production to read the uploaded targets - - [ ] We need to update the Web UI and ETL process in Namibia production to read the uploaded targets per jurisdiction instead of structure counts. The uploaded target structures should show under the "Structured Targets" on the IRS dashboard. | priority | update the web ui and etl process in namibia production to read the uploaded targets we need to update the web ui and etl process in namibia production to read the uploaded targets per jurisdiction instead of structure counts the uploaded target structures should show under the structured targets on the irs dashboard | 1 |
471,614 | 13,595,475,859 | IssuesEvent | 2020-09-22 03:12:59 | Technolgap/technolgap-website | https://api.github.com/repos/Technolgap/technolgap-website | closed | Create Navigation Bar | high-priority | **Context**
We need a way for users to navigate our website. A navigation bar is the perfect way to go.
**Problem**
We don't have one...yet.
**Acceptance Criteria:**
* The appearance of this component's desktop version must be exact or (very) similar to the approved Figma mock-up (https://www.figma.com/file/ciTL29R7gAaKkipH6v8QLL/Technolgap-official-Mockups?node-id=511%3A661)
<img width="619" alt="navBar" src="https://user-images.githubusercontent.com/33503336/90978230-3a679a00-e51a-11ea-9fc1-14286939de78.png">
* When user clicks on the **Home**, it should lead them to the home/index page
* When user clicks on the **About**, it should lead them to the about page
* When the user is in mobile mode, it should display a hamburger menu
<img width="402" alt="navBar - mobile" src="https://user-images.githubusercontent.com/33503336/90978245-70a51980-e51a-11ea-8153-09f246c13c3e.png">
* When the user is in mobile mode and clicks on the hamburger menu, it should display the two pages available and be able to navigate
> NOTE: the below image is a reference image. Some obvious changes might be to have the link when clicked by the user is to be the misty pink colour and that pages should be "home" and "about"
<img width="156" alt="ReferenceImageForNavBar" src="https://user-images.githubusercontent.com/33503336/90978345-36884780-e51b-11ea-90a0-d982e16428e7.png">
* When the user selects the page from the hamburger menu, it should become a misty-pink colour | 1.0 | Create Navigation Bar - **Context**
We need a way for users to navigate our website. A navigation bar is the perfect way to go.
**Problem**
We don't have one...yet.
**Acceptance Criteria:**
* The appearance of this component's desktop version must be exact or (very) similar to the approved Figma mock-up (https://www.figma.com/file/ciTL29R7gAaKkipH6v8QLL/Technolgap-official-Mockups?node-id=511%3A661)
<img width="619" alt="navBar" src="https://user-images.githubusercontent.com/33503336/90978230-3a679a00-e51a-11ea-9fc1-14286939de78.png">
* When user clicks on the **Home**, it should lead them to the home/index page
* When user clicks on the **About**, it should lead them to the about page
* When the user is in mobile mode, it should display a hamburger menu
<img width="402" alt="navBar - mobile" src="https://user-images.githubusercontent.com/33503336/90978245-70a51980-e51a-11ea-8153-09f246c13c3e.png">
* When the user is in mobile mode and clicks on the hamburger menu, it should display the two pages available and be able to navigate
> NOTE: the below image is a reference image. Some obvious changes might be to have the link when clicked by the user is to be the misty pink colour and that pages should be "home" and "about"
<img width="156" alt="ReferenceImageForNavBar" src="https://user-images.githubusercontent.com/33503336/90978345-36884780-e51b-11ea-90a0-d982e16428e7.png">
* When the user selects the page from the hamburger menu, it should become a misty-pink colour | priority | create navigation bar context we need a way for users to navigate our website a navigation bar is the perfect way to go problem we don t have one yet acceptance criteria the appearance of this component s desktop version must be exact or very similar to the approved figma mock up img width alt navbar src when user clicks on the home it should lead them to the home index page when user clicks on the about it should lead them to the about page when the user is in mobile mode it should display a hamburger menu img width alt navbar mobile src when the user is in mobile mode and clicks on the hamburger menu it should display the two pages available and be able to navigate note the below image is a reference image some obvious changes might be to have the link when clicked by the user is to be the misty pink colour and that pages should be home and about img width alt referenceimagefornavbar src when the user selects the page from the hamburger menu it should become a misty pink colour | 1 |
555,293 | 16,451,043,996 | IssuesEvent | 2021-05-21 05:49:54 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | opened | Send-Tabs is blocking reconnect | Priority: High | It might not be the only thing, but this I'm confident of.
BYOND blocks reconnects if a (non-winset) call is made, such as a verb, during reconnect.
We cleared all these from tgchat and friends, but the stat panel is doing it now too. I've previously confirmed this through reverse engineering, but now have definitive evidence of it through BYOND itself.

I had wesoda delete my verbs, and suddenly I could reconnect. | 1.0 | Send-Tabs is blocking reconnect - It might not be the only thing, but this I'm confident of.
BYOND blocks reconnects if a (non-winset) call is made, such as a verb, during reconnect.
We cleared all these from tgchat and friends, but the stat panel is doing it now too. I've previously confirmed this through reverse engineering, but now have definitive evidence of it through BYOND itself.

I had wesoda delete my verbs, and suddenly I could reconnect. | priority | send tabs is blocking reconnect it might not be the only thing but this i m confident of byond blocks reconnects if a non winset call is made such as a verb during reconnect we cleared all these from tgchat and friends but the stat panel is doing it now too i ve previously confirmed this through reverse engineering but now have definitive evidence of it through byond itself i had wesoda delete my verbs and suddenly i could reconnect | 1 |
390,127 | 11,525,222,152 | IssuesEvent | 2020-02-15 06:38:45 | wso2/docs-is | https://api.github.com/repos/wso2/docs-is | opened | Screenshots in password recovery page needs to be updated with new styling | Priority/Highest Severity/Blocker | **Description:**
Relevant page https://is.docs.wso2.com/en/next/learn/password-recovery/ | 1.0 | Screenshots in password recovery page needs to be updated with new styling - **Description:**
Relevant page https://is.docs.wso2.com/en/next/learn/password-recovery/ | priority | screenshots in password recovery page needs to be updated with new styling description relevant page | 1 |
665,358 | 22,310,232,409 | IssuesEvent | 2022-06-13 16:17:49 | VEuPathDB/EbrcModelCommon | https://api.github.com/repos/VEuPathDB/EbrcModelCommon | closed | Implement isSecret property to hide study card from the live & legacy sites | high priority | The grayed-out study card for UMSP should not be visible on our live site. We still want UMSP available on the QA and QA.restricted sites.
We decided against commenting out the injector for UMSP on the branch because it is likely that we won't want the UMSP card showing up on the live or legacy sites for ~ the next year. If we commented out the injector for UMSP, someone would need to remember to comment it out before every release.
@aurreco-uga will add a new property `isSecret` to the clinepi.xml file (**on Tuesday May 31 or later**) and will test to make sure it doesn't break anything
- `isSecret = T` will allow studies to appear on the QA and QA.restricted sites, but there will be no study card (neither active nor grayed out) on the live & legacy sites
- `isSecret = F` will allow either active or grayed out study cards to appear on the live & legacy sites
_**Note, David Roos is in California this week showing ClinEpi to PRISM and UMSP providers, so please don't push any changes before Tuesday May 31!**_
after Cristina has tested isSecret, @dmfalke and/or @jtlong3rd will update the client, preventing any cards (active & grayed out) from showing up on the live & legacy sites
| 1.0 | Implement isSecret property to hide study card from the live & legacy sites - The grayed-out study card for UMSP should not be visible on our live site. We still want UMSP available on the QA and QA.restricted sites.
We decided against commenting out the injector for UMSP on the branch because it is likely that we won't want the UMSP card showing up on the live or legacy sites for ~ the next year. If we commented out the injector for UMSP, someone would need to remember to comment it out before every release.
@aurreco-uga will add a new property `isSecret` to the clinepi.xml file (**on Tuesday May 31 or later**) and will test to make sure it doesn't break anything
- `isSecret = T` will allow studies to appear on the QA and QA.restricted sites, but there will be no study card (neither active nor grayed out) on the live & legacy sites
- `isSecret = F` will allow either active or grayed out study cards to appear on the live & legacy sites
_**Note, David Roos is in California this week showing ClinEpi to PRISM and UMSP providers, so please don't push any changes before Tuesday May 31!**_
after Cristina has tested isSecret, @dmfalke and/or @jtlong3rd will update the client, preventing any cards (active & grayed out) from showing up on the live & legacy sites
| priority | implement issecret property to hide study card from the live legacy sites the grayed out study card for umsp should not be visible on our live site we still want umsp available on the qa and qa restricted sites we decided against commenting out the injector for umsp on the branch because it is likely that we won t want the umsp card showing up on the live or legacy sites for the next year if we commented out the injector for umsp someone would need to remember to comment it out before every release aurreco uga will add a new property issecret to the clinepi xml file on tuesday may or later and will test to make sure it doesn t break anything issecret t will allow studies to appear on the qa and qa restricted sites but there will be no study card neither active nor grayed out on the live legacy sites issecret f will allow either active or grayed out study cards to appear on the live legacy sites note david roos is in california this week showing clinepi to prism and umsp providers so please don t push any changes before tuesday may after cristina has tested issecret dmfalke and or will update the client preventing any cards active grayed out from showing up on the live legacy sites | 1
605,843 | 18,741,007,238 | IssuesEvent | 2021-11-04 13:36:33 | rsherrera/project-caja | https://api.github.com/repos/rsherrera/project-caja | opened | Plan de Pago. No se puede dar de alta PP para afiliado 4113. | bug Priority-High mantimiento | El sistema arroja mensaje de error (Valor de Timeout caducado. El perรญodo de tiempo de espera caducรณ antes de completar la operaciรณn o el servidor no responde. ) luego de varios segundos en espera de dar de alta un Plan de pago para el af. 4113. El af. tiene mas de 30 cuotas impagas. | 1.0 | Plan de Pago. No se puede dar de alta PP para afiliado 4113. - El sistema arroja mensaje de error (Valor de Timeout caducado. El perรญodo de tiempo de espera caducรณ antes de completar la operaciรณn o el servidor no responde. ) luego de varios segundos en espera de dar de alta un Plan de pago para el af. 4113. El af. tiene mas de 30 cuotas impagas. | priority | plan de pago no se puede dar de alta pp para afiliado el sistema arroja mensaje de error valor de timeout caducado el perรญodo de tiempo de espera caducรณ antes de completar la operaciรณn o el servidor no responde luego de varios segundos en espera de dar de alta un plan de pago para el af el af tiene mas de cuotas impagas | 1 |
806,849 | 29,923,139,878 | IssuesEvent | 2023-06-22 01:26:40 | themoment-team/hello-gsm-back-v2 | https://api.github.com/repos/themoment-team/hello-gsm-back-v2 | closed | [domain/application] ์ํํ ์ถ๋ ฅ ๊ธฐ๋ฅ ์ถ๊ฐ | feature priority: high | ## ๊ฐ์
์ํํ์ ๋ค์ด๊ฐ ๋ฐ์ดํฐ๋ฅผ ๋ณด์ฌ์ฃผ๋ ๊ธฐ๋ฅ์ ์ถ๊ฐํฉ๋๋ค
## ํ ์ผ
- [ ] Repository ํ์ํ ๋ฉ์๋ ๊ตฌํ
- [ ] TicketsQuery ์์ฑ ๋ฐ ๋ฉ์๋ ๊ตฌํ
- [ ] Controller ํ์ํ ๋ฉ์๋ ๊ตฌํ
- [ ] ํ์ํ Dto ๊ตฌํ | 1.0 | [domain/application] ์ํํ ์ถ๋ ฅ ๊ธฐ๋ฅ ์ถ๊ฐ - ## ๊ฐ์
์ํํ์ ๋ค์ด๊ฐ ๋ฐ์ดํฐ๋ฅผ ๋ณด์ฌ์ฃผ๋ ๊ธฐ๋ฅ์ ์ถ๊ฐํฉ๋๋ค
## ํ ์ผ
- [ ] Repository ํ์ํ ๋ฉ์๋ ๊ตฌํ
- [ ] TicketsQuery ์์ฑ ๋ฐ ๋ฉ์๋ ๊ตฌํ
- [ ] Controller ํ์ํ ๋ฉ์๋ ๊ตฌํ
- [ ] ํ์ํ Dto ๊ตฌํ | priority | ์ํํ ์ถ๋ ฅ ๊ธฐ๋ฅ ์ถ๊ฐ ๊ฐ์ ์ํํ์ ๋ค์ด๊ฐ ๋ฐ์ดํฐ๋ฅผ ๋ณด์ฌ์ฃผ๋ ๊ธฐ๋ฅ์ ์ถ๊ฐํฉ๋๋ค ํ ์ผ repository ํ์ํ ๋ฉ์๋ ๊ตฌํ ticketsquery ์์ฑ ๋ฐ ๋ฉ์๋ ๊ตฌํ controller ํ์ํ ๋ฉ์๋ ๊ตฌํ ํ์ํ dto ๊ตฌํ | 1 |
657,702 | 21,801,755,105 | IssuesEvent | 2022-05-16 06:22:52 | ChicoState/Happy-Hour-Finder | https://api.github.com/repos/ChicoState/Happy-Hour-Finder | closed | As a user, I want to be sure that the information provided is available at any time I want so I can rely on it | High Priority Medium Difficulty | Create database snapshots and/or backups | 1.0 | As a user, I want to be sure that the information provided is available at any time I want so I can rely on it - Create database snapshots and/or backups | priority | as a user i want to be sure that the information provided is available at any time i want so i can rely on it create database snapshots and or backups | 1 |
117,173 | 4,711,928,863 | IssuesEvent | 2016-10-14 15:14:06 | onaio/onadata | https://api.github.com/repos/onaio/onadata | closed | Add meta permissions for Can Edit (editor) and Can Submit (dataentry) roles on Forms | 2016 Permissions Priority: High Size: Medium (2-3) | PLD:
Add meta-permissions for `editor` and `dataentry` roles to modify how they view the data.
The different permission options for the two roles are:
**dataentry** role:
- **Allow access to all data**
User will be able to view and download data submitted by everyone.
- **Block access to data submitted by other users**
User will be able to view and download only the data they have submitted.
- **Block access to all data**
User will only be able to submit data and will not be able to view or download data.
**editor** role:
- **Allow access to all data**
User will be able to view, edit, and download data submitted by everyone.
- **Block access to data submitted by other users**
User will be able to view, edit, and download only the data they have submitted.
The approaches to implement this are:
1. Create new roles
2. Add new meta fields to XForm to filter data based on the roles:
- dataentry-all: true or false
- dataentry-own: true or false
- dataentry-none: true or false
- editor-all: true or false
- editor-own: true or false
3. Create dynamic roles based on need. This will allow us to add custom roles when needed.
@ivermac @denniswambua you can add what I missed out to the description.
Which is the best approach?
This blocks https://github.com/onaio/zebra/issues/4459
ME:
After having a chat with @pld we thought that it would be better to go with the second approach that includes adding meta fields to an xform. The cleaner approach in our opinion would be to have a dictionary that would have `can-submit` and `can-edit` keys whose values can be `all`, `own` or `none`. Any other suggestions are welcomed | 1.0 | Add meta permissions for Can Edit (editor) and Can Submit (dataentry) roles on Forms - PLD:
Add meta-permissions for `editor` and `dataentry` roles to modify how they view the data.
The different permission options for the two roles are:
**dataentry** role:
- **Allow access to all data**
User will be able to view and download data submitted by everyone.
- **Block access to data submitted by other users**
User will be able to view and download only the data they have submitted.
- **Block access to all data**
User will only be able to submit data and will not be able to view or download data.
**editor** role:
- **Allow access to all data**
User will be able to view, edit, and download data submitted by everyone.
- **Block access to data submitted by other users**
User will be able to view, edit, and download only the data they have submitted.
The approaches to implement this are:
1. Create new roles
2. Add new meta fields to XForm to filter data based on the roles:
- dataentry-all: true or false
- dataentry-own: true or false
- dataentry-none: true or false
- editor-all: true or false
- editor-own: true or false
3. Create dynamic roles based on need. This will allow us to add custom roles when needed.
@ivermac @denniswambua you can add what I missed out to the description.
Which is the best approach?
This blocks https://github.com/onaio/zebra/issues/4459
ME:
After having a chat with @pld we thought that it would be better to go with the second approach that includes adding meta fields to an xform. The cleaner approach in our opinion would be to have a dictionary that would have `can-submit` and `can-edit` keys whose values can be `all`, `own` or `none`. Any other suggestions are welcomed | priority | add meta permissions for can edit editor and can submit dataentry roles on forms pld add meta permissions for editor and dataentry roles to modify how they should view with the data the different permission options for the two roles are dataentry role allow access to all data user will be able to view and download data submitted by everyone block access to data submitted by other users user will be able to view and download only the data they have submitted block access to all data user will only be able to submit data and will not be able to view or download data editor role allow access to all data user will be able to view edit and download data submitted by everyone block access to data submitted by other users user will be able to view edit and download only the data they have submitted the approaches to implement this are create new roles add new meta fields to xform to filter data based on the roles dataentry all true or false dataentry own true or false dataentry none true or false editor all true or false editor own true or false create dynamic roles based on need this will allow us to add custom roles when needed ivermac denniswambua you can add what i missed out to the description which is the best approach this blocks me after having a chat with pld we thought that it would be better to go with the second approach that includes adding meta fields to an xform the cleaner approach in our opinion would be to have a dictionary that would have can submit and can edit keys whose values can be all own or none any other suggestions are welcomed | 1 |
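The dictionary approach settled on in the last comment (`can-submit` and `can-edit` keys taking `all`, `own`, or `none`) can be sketched as a permission check. This is an illustrative sketch only; the helper name `can_view_submission` and its signature are hypothetical, not part of the actual onadata implementation:

```python
# Hypothetical sketch of the proposed XForm meta-permissions dictionary:
# "can-submit" scopes the dataentry role, "can-edit" scopes the editor role.
VALID_SCOPES = {"all", "own", "none"}

def can_view_submission(meta, role, is_owner):
    """Return True if a user with `role` may view a given submission.

    `meta` is the per-form meta-permissions dict, e.g.
    {"can-submit": "own", "can-edit": "none"}; `is_owner` says whether
    the user submitted the record themselves.
    """
    key = "can-edit" if role == "editor" else "can-submit"
    scope = meta.get(key, "all")  # assume "all" when unset (legacy forms)
    if scope not in VALID_SCOPES:
        raise ValueError(f"unknown scope: {scope!r}")
    if scope == "all":
        return True
    if scope == "own":
        return is_owner
    return False  # "none": submit-only, no viewing or downloading

meta = {"can-submit": "own", "can-edit": "all"}
assert can_view_submission(meta, "dataentry", is_owner=True)
assert not can_view_submission(meta, "dataentry", is_owner=False)
assert can_view_submission(meta, "editor", is_owner=False)
```

One advantage of this shape over per-role boolean fields (`dataentry-all`, `dataentry-own`, ...) is that the three scopes are mutually exclusive by construction, so inconsistent combinations cannot be stored.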
62,014 | 3,164,193,051 | IssuesEvent | 2015-09-20 23:46:54 | cs2103aug2015-w15-3j/main | https://api.github.com/repos/cs2103aug2015-w15-3j/main | opened | Missing Controller Files in UI package | priority.high type.bug | Please create the 3 controller files to prevent any compilation error. Implementations can wait, but at least we have placeholders for these controllers | 1.0 | priority | 1
133,715 | 5,207,115,637 | IssuesEvent | 2017-01-24 22:30:59 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Instantiat a torch.Size with torch.Tensor breaks kernel | bug high priority | the following code `torch.Size(torch.ones(3))` creates a segmentation fault and breaks the kernel | 1.0 | priority | 1
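The PyTorch row above reports that constructing a `torch.Size` directly from a float tensor segfaulted: `torch.Size` is a tuple subclass that expects integer dimensions, not tensor elements. A pure-Python stand-in (hypothetical, not the actual PyTorch fix) shows the kind of validation that avoids the crash, converting and rejecting non-integral dimensions up front:

```python
# Hypothetical validator: coerce an iterable of dimension values into a
# plain tuple of non-negative ints, rejecting anything non-integral
# instead of passing raw tensor elements to a C-level constructor.
def safe_size(dims):
    out = []
    for d in dims:
        i = int(d)
        if i != d or i < 0:
            raise TypeError(f"invalid dimension: {d!r}")
        out.append(i)
    return tuple(out)

# torch.ones(3) holds the float values [1.0, 1.0, 1.0]; after validation
# they become an ordinary shape tuple.
assert safe_size([1.0, 1.0, 1.0]) == (1, 1, 1)
```

With real tensors, the equivalent safe call pattern is to materialize Python ints first (e.g. via `tolist()`) rather than handing a tensor to `torch.Size`.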
819,861 | 30,753,286,273 | IssuesEvent | 2023-07-28 21:42:47 | geopm/geopm | https://api.github.com/repos/geopm/geopm | closed | Set GEOPM_TIMEOUT to 0 if not launching geopm | bug bug-priority-low bug-exposure-low bug-quality-high | If a user links their application with libgeopm but runs the application without geopm, they will still get a 30 second timeout waiting for the controller. This should be set up so that there is no wait when running without geopm, and increase the timeout to the default 30s when using the launcher or when GEOPM_CTL/GEOPM_REPORT/GEOPM_PROFILE are set. | 1.0 | priority | 1
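The behavior requested in the GEOPM row can be sketched as a small decision function. The environment-variable names (`GEOPM_TIMEOUT`, `GEOPM_CTL`, `GEOPM_REPORT`, `GEOPM_PROFILE`) come from the issue text; the function itself is an illustrative sketch of the proposed policy, not GEOPM's actual implementation:

```python
# Sketch of the requested policy: no wait when the application runs
# without geopm, the default 30 s wait when geopm is in use, and an
# explicit GEOPM_TIMEOUT always winning.
def geopm_timeout(env):
    if "GEOPM_TIMEOUT" in env:
        return float(env["GEOPM_TIMEOUT"])  # user override
    geopm_active = any(
        key in env for key in ("GEOPM_CTL", "GEOPM_REPORT", "GEOPM_PROFILE")
    )
    return 30.0 if geopm_active else 0.0

assert geopm_timeout({}) == 0.0                      # linked but not launched
assert geopm_timeout({"GEOPM_CTL": "process"}) == 30.0
assert geopm_timeout({"GEOPM_TIMEOUT": "5"}) == 5.0
```

Making the default depend on whether the launcher set any controller variables means a binary merely linked against libgeopm starts immediately instead of blocking for 30 seconds.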