| Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (stringclasses, 1 value) | created_at (stringlengths 19-19) | repo (stringlengths 5-112) | repo_url (stringlengths 34-141) | action (stringclasses, 3 values) | title (stringlengths 1-957) | labels (stringlengths 4-795) | body (stringlengths 1-259k) | index (stringclasses, 12 values) | text_combine (stringlengths 96-259k) | label (stringclasses, 2 values) | text (stringlengths 96-252k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
741,306 | 25,787,984,397 | IssuesEvent | 2022-12-09 22:54:36 | Automattic/abacus | https://api.github.com/repos/Automattic/abacus | closed | Impact on email experiments are incorrect | [!priority] medium [type] bug [section] experiment results [!team] explat [!milestone] current | <!-- Brief description/ context of issue. Provide links to p2 posts and relevant GitHub issues and PRs. -->
I was looking at the impact of the recent black friday email experiment and noticed that the impact is incorrect as we are extrapolating the experiment over time when the experiment is a one-time thing.
We should disable impact for email experiments. | 1.0 | Impact on email experiments are incorrect - <!-- Brief description/ context of issue. Provide links to p2 posts and relevant GitHub issues and PRs. -->
I was looking at the impact of the recent black friday email experiment and noticed that the impact is incorrect as we are extrapolating the experiment over time when the experiment is a one-time thing.
We should disable impact for email experiments. | priority | impact on email experiments are incorrect i was looking at the impact of the recent black friday email experiment and noticed that the impact is incorrect as we are extrapolating the experiment over time when the experiment is a one time thing we should disable impact for email experiments | 1 |
174,695 | 6,542,465,365 | IssuesEvent | 2017-09-02 07:11:07 | uclouvain/openjpeg | https://api.github.com/repos/uclouvain/openjpeg | closed | set reduce_factor_may_fail | bug Priority-Medium | Originally reported on Google Code with ID 474
```
This is the simplest solution (patch attached):
bin/opj_decompress -i Bretagne2.j2k -o Bretagne2.j2k.png -r 8
[INFO] Start to read j2k main header (0).
[ERROR] Error decoding component 0.
The number of resolutions 8 to remove is >= the maximum of resolutions 6 of this component
=== Reduce to 5 ===
[ERROR] Error reading COD marker
[ERROR] Marker handler function failed to read the marker segment
ERROR -> opj_decompress: failed to read the header
winfried
```
Reported by [szukw000](https://github.com/szukw000) on 2015-02-14 06:12:08
<hr>
- _Attachment: [reduce_factor_may_fail.dif](https://storage.googleapis.com/google-code-attachments/openjpeg/issue-474/comment-0/reduce_factor_may_fail.dif)_
| 1.0 | set reduce_factor_may_fail - Originally reported on Google Code with ID 474
```
This is the simplest solution (patch attached):
bin/opj_decompress -i Bretagne2.j2k -o Bretagne2.j2k.png -r 8
[INFO] Start to read j2k main header (0).
[ERROR] Error decoding component 0.
The number of resolutions 8 to remove is >= the maximum of resolutions 6 of this component
=== Reduce to 5 ===
[ERROR] Error reading COD marker
[ERROR] Marker handler function failed to read the marker segment
ERROR -> opj_decompress: failed to read the header
winfried
```
Reported by [szukw000](https://github.com/szukw000) on 2015-02-14 06:12:08
<hr>
- _Attachment: [reduce_factor_may_fail.dif](https://storage.googleapis.com/google-code-attachments/openjpeg/issue-474/comment-0/reduce_factor_may_fail.dif)_
| priority | set reduce factor may fail originally reported on google code with id this is the simplest solution patch attached bin opj decompress i o png r start to read main header error decoding component the number of resolutions to remove is the maximum of resolutions of this component reduce to error reading cod marker marker handler function failed to read the marker segment error opj decompress failed to read the header winfried reported by on attachment | 1 |
53,923 | 3,052,400,515 | IssuesEvent | 2015-08-12 14:32:54 | jkall/qgis-midvatten-plugin | https://api.github.com/repos/jkall/qgis-midvatten-plugin | closed | allow null values in w_levels_logger "level_masl" | enhancement Priority-Medium | I can not see no reason to set default value -999 | 1.0 | allow null values in w_levels_logger "level_masl" - I can not see no reason to set default value -999 | priority | allow null values in w levels logger level masl i can not see no reason to set default value | 1 |
766,854 | 26,901,845,758 | IssuesEvent | 2023-02-06 16:09:35 | BIDMCDigitalPsychiatry/LAMP-platform | https://api.github.com/repos/BIDMCDigitalPsychiatry/LAMP-platform | opened | Step Count Contains Duplicates and Inaccuracies | bug native core priority MEDIUM | Recently we have noticed that for multiple participants, some of their daily step count values calculated using `cortex.secondary.step_count.step_count` have been incredibly high (>200,000 on some days). We investigated this issue by looking at data from `cortex.raw.steps.steps`.
iPhone example:
Output from this cortex function shows that there are some duplicate rows. In addition, there are two categories of value in the 'source' column: com.apple.health, and 'null'. The values in rows with com.apple.health as the source seem reasonable, although there are some duplicates. However, values in rows with 'null' as the source appear to be an accumulation of all the steps taken so far that day.
For example, the screenshot below shows `cortex.raw.steps.steps` for one participant in pd.DataFrame form, and filtered by` 'type' = step_count`. Rows 0 and 1, and 22 and 23 are exact duplicates of each other. Rows 6-21 are all from the same day, but the values appear to be building on each other. The same is happening with rows 28-63.
We would like for duplicates to not appear in the raw data, and to not have accumulating values.
<img width="856" alt="Screen Shot 2023-02-06 at 10 16 26 AM" src="https://user-images.githubusercontent.com/89207083/217010091-10304fdc-21a9-484a-a2b4-bbe2497fda57.png">
Android example:
Android devices do not seem to have the same issues with duplicates and 'null' accumulating source values that iPhones do. However, some Android step counts still seem unbelievably high. Below is output from `cortex.raw.steps.steps` for a participant with an Android phone (output is in DataFrame form and filtered by day and` 'type' = step_count`). Their step count for the day that is shown in the image was over 55,000. As you can see from the image, the step counts seem inaccurate - one example is in rows 3250 and 3251, in which 86 steps were recorded in .005 seconds.
We would like to know if there is anything that can be done to make the Android steps more accurate.
<img width="714" alt="Screen Shot 2023-02-06 at 10 50 22 AM" src="https://user-images.githubusercontent.com/89207083/217018597-9790b734-d09c-451b-8d3b-d87fbb580b7f.png">
| 1.0 | Step Count Contains Duplicates and Inaccuracies - Recently we have noticed that for multiple participants, some of their daily step count values calculated using `cortex.secondary.step_count.step_count` have been incredibly high (>200,000 on some days). We investigated this issue by looking at data from `cortex.raw.steps.steps`.
iPhone example:
Output from this cortex function shows that there are some duplicate rows. In addition, there are two categories of value in the 'source' column: com.apple.health, and 'null'. The values in rows with com.apple.health as the source seem reasonable, although there are some duplicates. However, values in rows with 'null' as the source appear to be an accumulation of all the steps taken so far that day.
For example, the screenshot below shows `cortex.raw.steps.steps` for one participant in pd.DataFrame form, and filtered by` 'type' = step_count`. Rows 0 and 1, and 22 and 23 are exact duplicates of each other. Rows 6-21 are all from the same day, but the values appear to be building on each other. The same is happening with rows 28-63.
We would like for duplicates to not appear in the raw data, and to not have accumulating values.
<img width="856" alt="Screen Shot 2023-02-06 at 10 16 26 AM" src="https://user-images.githubusercontent.com/89207083/217010091-10304fdc-21a9-484a-a2b4-bbe2497fda57.png">
Android example:
Android devices do not seem to have the same issues with duplicates and 'null' accumulating source values that iPhones do. However, some Android step counts still seem unbelievably high. Below is output from `cortex.raw.steps.steps` for a participant with an Android phone (output is in DataFrame form and filtered by day and` 'type' = step_count`). Their step count for the day that is shown in the image was over 55,000. As you can see from the image, the step counts seem inaccurate - one example is in rows 3250 and 3251, in which 86 steps were recorded in .005 seconds.
We would like to know if there is anything that can be done to make the Android steps more accurate.
<img width="714" alt="Screen Shot 2023-02-06 at 10 50 22 AM" src="https://user-images.githubusercontent.com/89207083/217018597-9790b734-d09c-451b-8d3b-d87fbb580b7f.png">
| priority | step count contains duplicates and inaccuracies recently we have noticed that for multiple participants some of their daily step count values calculated using cortex secondary step count step count have been incredibly high on some days we investigated this issue by looking at data from cortex raw steps steps iphone example output from this cortex function shows that there are some duplicate rows in addition there are two categories of value in the source column com apple health and null the values in rows with com apple health as the source seem reasonable although there are some duplicates however values in rows with null as the source appear to be an accumulation of all the steps taken so far that day for example the screenshot below shows cortex raw steps steps for one participant in pd dataframe form and filtered by type step count rows and and and are exact duplicates of each other rows are all from the same day but the values appear to be building on each other the same is happening with rows we would like for duplicates to not appear in the raw data and to not have accumulating values img width alt screen shot at am src android example android devices do not seem to have the same issues with duplicates and null accumulating source values that iphones do however some android step counts still seem unbelievably high below is output from cortex raw steps steps for a participant with an android phone output is in dataframe form and filtered by day and type step count their step count for the day that is shown in the image was over as you can see from the image the step counts seem inaccurate one example is in rows and in which steps were recorded in seconds we would like to know if there is anything that can be done to make the android steps more accurate img width alt screen shot at am src | 1 |
653,429 | 21,582,037,190 | IssuesEvent | 2022-05-02 19:50:42 | vdjagilev/nmap-formatter | https://api.github.com/repos/vdjagilev/nmap-formatter | closed | Custom variables for custom templates | priority/medium tech/go type/feature tech/html | Add a possibility to pass custom variables for custom templates.
Example:
`--x-opt "foo=${bar}"`
Then this value can be used in a custom template like this:
```html
Some custom variables:
<ul>
<li><b>Foo value:</b> {{.custom.foo}}</li>
</ul>
```
This could be used in automated environments (pipelines?) to pass some values to the custom templates.
Can be implemented only after #23 | 1.0 | Custom variables for custom templates - Add a possibility to pass custom variables for custom templates.
Example:
`--x-opt "foo=${bar}"`
Then this value can be used in a custom template like this:
```html
Some custom variables:
<ul>
<li><b>Foo value:</b> {{.custom.foo}}</li>
</ul>
```
This could be used in automated environments (pipelines?) to pass some values to the custom templates.
Can be implemented only after #23 | priority | custom variables for custom templates add a possibility to pass custom variables for custom templates example x opt foo bar then this value can be used in a custom template like this html some custom variables foo value custom foo this could be used in automated environments pipelines to pass some values to the custom templates can be implemented only after | 1 |
388,505 | 11,488,471,173 | IssuesEvent | 2020-02-11 13:58:08 | DigitalCampus/oppia-mobile-android | https://api.github.com/repos/DigitalCampus/oppia-mobile-android | closed | Re-organise settings screen | Medium priority enhancement est-4-hours good-first-issue | It's grown and expanded quite a bit, so the ordering is a bit scattered now.
Suggested re-arrangement (keep on same page for now, unless very easy to divide into separate pages:
Visualization
- Preferred language
- Text size
- Display suggested activities
- Num suggested activities to display
- Highlight completed activities
- Show section numbers
- Show progress bar
- Show course description
Connection
- Server
- Username
- Download media via cellular network
- Background activity tracker
- Disable notifications
Security
- Enable logout from homepage
- Enable deleting of courses
- Protect admin actions with password
- Admin password
Advanced
- Storage location
- Connection timeout
- Response timeout
- Enable device admin (in device admin flavor)
- View tech info: OS version, SDK, etc... | 1.0 | Re-organise settings screen - It's grown and expanded quite a bit, so the ordering is a bit scattered now.
Suggested re-arrangement (keep on same page for now, unless very easy to divide into separate pages:
Visualization
- Preferred language
- Text size
- Display suggested activities
- Num suggested activities to display
- Highlight completed activities
- Show section numbers
- Show progress bar
- Show course description
Connection
- Server
- Username
- Download media via cellular network
- Background activity tracker
- Disable notifications
Security
- Enable logout from homepage
- Enable deleting of courses
- Protect admin actions with password
- Admin password
Advanced
- Storage location
- Connection timeout
- Response timeout
- Enable device admin (in device admin flavor)
- View tech info: OS version, SDK, etc... | priority | re organise settings screen it s grown and expanded quite a bit so the ordering is a bit scattered now suggested re arrangement keep on same page for now unless very easy to divide into separate pages visualization preferred language text size display suggested activities num suggested activities to display highlight completed activities show section numbers show progress bar show course description connection server username download media via cellular network background activity tracker disable notifications security enable logout from homepage enable deleting of courses protect admin actions with password admin password advanced storage location connection timeout response timeout enable device admin in device admin flavor view tech info os version sdk etc | 1 |
526,639 | 15,297,377,891 | IssuesEvent | 2021-02-24 08:20:25 | erlang/otp | https://api.github.com/repos/erlang/otp | closed | ERL-1365: macOS build fails: call_error_handler_size < sizeof(BeamInstr) | bug priority:medium team:VM |
Original reporter: `dmorneau`
Affected version: `OTP-24.0`
Component: `erts`
Migrated from: https://bugs.erlang.org/browse/ERL-1365
---
```
I get this compilation error building the master branch on macOS 11.0 beta:
{code:java}
beam/jit/beam_asm.cpp:726:beamasm_emit_call_error_handler() Assertion failed: buff_len - call_error_handler_size < sizeof(BeamInstr){code}
```
| 1.0 | ERL-1365: macOS build fails: call_error_handler_size < sizeof(BeamInstr) -
Original reporter: `dmorneau`
Affected version: `OTP-24.0`
Component: `erts`
Migrated from: https://bugs.erlang.org/browse/ERL-1365
---
```
I get this compilation error building the master branch on macOS 11.0 beta:
{code:java}
beam/jit/beam_asm.cpp:726:beamasm_emit_call_error_handler() Assertion failed: buff_len - call_error_handler_size < sizeof(BeamInstr){code}
```
| priority | erl macos build fails call error handler size sizeof beaminstr original reporter dmorneau affected version otp component erts migrated from i get this compilation error building the master branch on macos beta code java beam jit beam asm cpp beamasm emit call error handler assertion failed buff len call error handler size sizeof beaminstr code | 1 |
625,258 | 19,723,450,410 | IssuesEvent | 2022-01-13 17:29:20 | hashicorp/terraform-cdk | https://api.github.com/repos/hashicorp/terraform-cdk | closed | Use `local` backend instead of `-state` CLI option for local state | enhancement cdktf priority/important-longterm size/medium | <!--- Please keep this note for the community --->
### Community Note
- Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
We [currently specify the `-state` option](https://github.com/hashicorp/terraform-cdk/blob/6f7dbdd90403867a5a6e42d501c403371c03a370/packages/cdktf-cli/bin/cmds/ui/models/terraform-cli.ts#L121) for setting the Terraform state file location for local state. We even seem to set this when another backend is set (but it has no effect then).
The `-state` option is deprecated and we should stop using it. We should use the explicit [`local` backend](https://www.terraform.io/language/settings/backends/local) instead.
<!--- Please leave a helpful description of the feature request here. --->
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
### References
* https://github.com/hashicorp/terraform/issues/30195
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
| 1.0 | Use `local` backend instead of `-state` CLI option for local state - <!--- Please keep this note for the community --->
### Community Note
- Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
We [currently specify the `-state` option](https://github.com/hashicorp/terraform-cdk/blob/6f7dbdd90403867a5a6e42d501c403371c03a370/packages/cdktf-cli/bin/cmds/ui/models/terraform-cli.ts#L121) for setting the Terraform state file location for local state. We even seem to set this when another backend is set (but it has no effect then).
The `-state` option is deprecated and we should stop using it. We should use the explicit [`local` backend](https://www.terraform.io/language/settings/backends/local) instead.
<!--- Please leave a helpful description of the feature request here. --->
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
### References
* https://github.com/hashicorp/terraform/issues/30195
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
| priority | use local backend instead of state cli option for local state community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description we for setting the terraform state file location for local state we even seem to set this when another backend is set but it has no effect then the state option is deprecated and we should stop using it we should use the explicit instead references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation | 1 |
48,069 | 2,990,142,532 | IssuesEvent | 2015-07-21 07:16:00 | jayway/rest-assured | https://api.github.com/repos/jayway/rest-assured | closed | Add JsonPath support for Spring Hateoas (EnableHateoas) | Duplicate enhancement imported Priority-Medium | _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on December 05, 2013 12:39:09_
https://github.com/spring-projects/spring-hateoas#enablehypermediasupport
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=287_ | 1.0 | Add JsonPath support for Spring Hateoas (EnableHateoas) - _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on December 05, 2013 12:39:09_
https://github.com/spring-projects/spring-hateoas#enablehypermediasupport
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=287_ | priority | add jsonpath support for spring hateoas enablehateoas from on december original issue | 1 |
665,222 | 22,304,025,588 | IssuesEvent | 2022-06-13 11:23:07 | virtual-tech-school/youtube-clone | https://api.github.com/repos/virtual-tech-school/youtube-clone | opened | Channel Page | :pushpin: task :orange_square: priority: medium :orange_circle: level: medium | Create a channel page, similar to how it is on YouTube. Reference - https://www.youtube.com/c/ApoorvGoyalMain
The page would container -
- [ ] Banner (with links on top)
- [ ] Profile Image, Channel Name, Subscribe Button, etc.
- [ ] Tabs (Home, Videos, Playlists, etc.)
Branch Name - issue-9 | 1.0 | Channel Page - Create a channel page, similar to how it is on YouTube. Reference - https://www.youtube.com/c/ApoorvGoyalMain
The page would container -
- [ ] Banner (with links on top)
- [ ] Profile Image, Channel Name, Subscribe Button, etc.
- [ ] Tabs (Home, Videos, Playlists, etc.)
Branch Name - issue-9 | priority | channel page create a channel page similar to how it is on youtube reference the page would container banner with links on top profile image channel name subscribe button etc tabs home videos playlists etc branch name issue | 1 |
617,273 | 19,346,655,463 | IssuesEvent | 2021-12-15 11:32:16 | numba/numba | https://api.github.com/repos/numba/numba | closed | Unify more of the CPU and GPU implementations | mediumpriority feature_request | Is there redundant code between the different targets? (Probably yes.)
| 1.0 | Unify more of the CPU and GPU implementations - Is there redundant code between the different targets? (Probably yes.)
| priority | unify more of the cpu and gpu implementations is there redundant code between the different targets probably yes | 1 |
174,287 | 6,538,859,443 | IssuesEvent | 2017-09-01 08:35:40 | apache/incubator-openwhisk-wskdeploy | https://api.github.com/repos/apache/incubator-openwhisk-wskdeploy | closed | Add more CI test cases. | priority: medium | Now as we started the job of support CI with Openwhisk deployments, we still lack util codes to support CI test, we need to add this infra/shim. | 1.0 | Add more CI test cases. - Now as we started the job of support CI with Openwhisk deployments, we still lack util codes to support CI test, we need to add this infra/shim. | priority | add more ci test cases now as we started the job of support ci with openwhisk deployments we still lack util codes to support ci test we need to add this infra shim | 1 |
363,489 | 10,741,716,869 | IssuesEvent | 2019-10-29 20:50:16 | PlasmaPy/PlasmaPy | https://api.github.com/repos/PlasmaPy/PlasmaPy | opened | Where should we put example interfaces to simulation packages? | Needs decision Priority: medium simulations | While discussing PlasmaPy top-level package structure during our community meeting today and at the APS DPP meeting last week (https://github.com/PlasmaPy/PlasmaPy-PLEPs/pull/26), the issue came up of where to put example interfaces to existing simulation packages.
For some background, our current plan is to create classes in `plasmapy.simulation` that can be used to describe the initial conditions, boundary conditions, and computational domain independently of the numerics. We also want to create an abstract interface that can be implemented with different plasma simulation codes (including the ones we develop along with codes from the community).
It would be really helpful to the community for us to have examples for how to build implementations for these interfaces for different codes that are already in existence. We weren't able to reach a consensus on where to put the example interfaces, so we're raising this issue. Where would be the best place to put these example interfaces to plasma simulation codes that have already been developed by members of the broader plasma physics community?
| 1.0 | Where should we put example interfaces to simulation packages? - While discussing PlasmaPy top-level package structure during our community meeting today and at the APS DPP meeting last week (https://github.com/PlasmaPy/PlasmaPy-PLEPs/pull/26), the issue came up of where to put example interfaces to existing simulation packages.
For some background, our current plan is to create classes in `plasmapy.simulation` that can be used to describe the initial conditions, boundary conditions, and computational domain independently of the numerics. We also want to create an abstract interface that can be implemented with different plasma simulation codes (including the ones we develop along with codes from the community).
It would be really helpful to the community for us to have examples for how to build implementations for these interfaces for different codes that are already in existence. We weren't able to reach a consensus on where to put the example interfaces, so we're raising this issue. Where would be the best place to put these example interfaces to plasma simulation codes that have already been developed by members of the broader plasma physics community?
| priority | where should we put example interfaces to simulation packages while discussing plasmapy top level package structure during our community meeting today and at the aps dpp meeting last week the issue came up of where to put example interfaces to existing simulation packages for some background our current plan is to create classes in plasmapy simulation that can be used to describe the initial conditions boundary conditions and computational domain independently of the numerics we also want to create an abstract interface that can be implemented with different plasma simulation codes including the ones we develop along with codes from the community it would be really helpful to the community for us to have examples for how to build implementations for these interfaces for different codes that are already in existence we weren t able to reach a consensus on where to put the example interfaces so we re raising this issue where would be the best place to put these example interfaces to plasma simulation codes that have already been developed by members of the broader plasma physics community | 1 |
584,386 | 17,422,783,979 | IssuesEvent | 2021-08-04 05:00:50 | staynomad/Nomad-Back | https://api.github.com/repos/staynomad/Nomad-Back | closed | sendVerificationEmail() has incorrect parameters | dev:bug difficulty:easy priority:medium | # Background
<!--- Put any relevant background information here. --->
`sendVerificationEmail` should have name, email, and userId as parameters. All instances of `sendVerificationEmail` only have two parameters (email and userId).
# Task
<!--- Put the task here (ideally bullet points). --->
- Update all instances of `sendVerificationEmail` to pass in 3 parameters - name, email, userId
- ctrl-F for `sendVerificationEmail` should show all cases where it's used
# Done When
<!--- Put the completion criteria for the issue here. --->
- `sendVerificationEmail` works as expected
- Changing back to a host does not result in error
| 1.0 | sendVerificationEmail() has incorrect parameters - # Background
<!--- Put any relevant background information here. --->
`sendVerificationEmail` should have name, email, and userId as parameters. All instances of `sendVerificationEmail` only have two parameters (email and userId).
# Task
<!--- Put the task here (ideally bullet points). --->
- Update all instances of `sendVerificationEmail` to pass in 3 parameters - name, email, userId
- ctrl-F for `sendVerificationEmail` should show all cases where it's used
# Done When
<!--- Put the completion criteria for the issue here. --->
- `sendVerificationEmail` works as expected
- Changing back to a host does not result in error
| priority | sendverificationemail has incorrect parameters background sendverificationemail should have name email and userid as parameters all instances of sendverificationemail only have two parameters email and userid task update all instances of sendverificationemail to pass in parameters name email userid ctrl f for sendverificationemail should show all cases where it s used done when sendverificationemail works as expected changing back to a host does not result in error | 1 |
721,566 | 24,831,656,173 | IssuesEvent | 2022-10-26 04:26:47 | AY2223S1-CS2103T-T12-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-T12-4/tp | closed | Add information for patient showing medication allergies | type.Story priority.Medium type.NoteBased | As a private nurse I want to know what type of medication my patient is allergic to so that I can avoid any potential mistake
| 1.0 | Add information for patient showing medication allergies - As a private nurse I want to know what type of medication my patient is allergic to so that I can avoid any potential mistake
| priority | add information for patient showing medication allergies as a private nurse i want to know what type of medication my patient is allergic to so that i can avoid any potential mistake | 1 |
492,384 | 14,201,511,201 | IssuesEvent | 2020-11-16 07:49:41 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | opened | Add ability to Edit the Activities on Draft Case Triggered Plans | Priority: Medium | Presently, a user cannot edit (remove) the activities on draft a case triggered plan. The client would want this feature developed. | 1.0 | Add ability to Edit the Activities on Draft Case Triggered Plans - Presently, a user cannot edit (remove) the activities on draft a case triggered plan. The client would want this feature developed. | priority | add ability to edit the activities on draft case triggered plans presently a user cannot edit remove the activities on draft a case triggered plan the client would want this feature developed | 1 |
289,220 | 8,861,854,675 | IssuesEvent | 2019-01-10 02:43:42 | sussol/mobile | https://api.github.com/repos/sussol/mobile | opened | "Monthly usage" in current stock page row expansion not translated | Effort small Priority: Medium | Build Number: 2.2.0-rc0
Description:
https://github.com/sussol/mobile/blob/529c4459f61c7f934c9af3d8272a14c75242929b/src/pages/StockPage.js#L94
This line doesn't use localisation, and is a typo "Montly Usage"
Should have typo corrected for English and should use localisation.
Reproducible: Yes
Reproduction Steps: Go there and look
Comments:
| 1.0 | "Monthly usage" in current stock page row expansion not translated - Build Number: 2.2.0-rc0
Description:
https://github.com/sussol/mobile/blob/529c4459f61c7f934c9af3d8272a14c75242929b/src/pages/StockPage.js#L94
This line doesn't use localisation, and is a typo "Montly Usage"
Should have typo corrected for English and should use localisation.
Reproducible: Yes
Reproduction Steps: Go there and look
Comments:
| priority | monthly usage in current stock page row expansion not translated build number description this line doesn t use localisation and is a typo montly usage should have typo corrected for english and should use localisation reproducible yes reproduction steps go there and look comments | 1 |
205,670 | 7,104,589,060 | IssuesEvent | 2018-01-16 10:29:33 | AnSyn/ansyn | https://api.github.com/repos/AnSyn/ansyn | closed | Bug- shadow mouse- cursor on inactive screen | Bug Priority: High Severity: Medium |
**Current behavior**
When hovering an inactive screen with the cursor- the user can see both the shadow mouse cross and the cursor in the same screen.
**Expected behavior**
the shadow mouse cross should not appear, so the user wont be confused.
**Minimal reproduction of the problem with instructions**
open more then one screen
activate shadow mouse
hover the inactive screen
| 1.0 | Bug- shadow mouse- cursor on inactive screen -
**Current behavior**
When hovering an inactive screen with the cursor- the user can see both the shadow mouse cross and the cursor in the same screen.
**Expected behavior**
the shadow mouse cross should not appear, so the user wont be confused.
**Minimal reproduction of the problem with instructions**
open more then one screen
activate shadow mouse
hover the inactive screen
| priority | bug shadow mouse cursor on inactive screen current behavior when hovering an inactive screen with the cursor the user can see both the shadow mouse cross and the cursor in the same screen expected behavior the shadow mouse cross should not appear so the user wont be confused minimal reproduction of the problem with instructions open more then one screen activate shadow mouse hover the inactive screen | 1 |
448,252 | 12,946,283,832 | IssuesEvent | 2020-07-18 18:26:12 | status-im/nim-beacon-chain | https://api.github.com/repos/status-im/nim-beacon-chain | opened | [SEC] Brittle AES-GCM Tag/IV Construction in Nimcrypto | difficulty:medium low priority nbc-audit-2020-0 :passport_control: security status:reported | ## Description
The [`bcmode.nim`](https://github.com/cheatfate/nimcrypto/blob/f767595f4ddec2b5570b5194feb96954c00a6499/nimcrypto/bcmode.nim) source file provides support for large range of block ciphers including AES-ECB and AES-GCM.
For the AES-GCM cipher mode, the code provides a `getTag()` function that gives the caller access to the authentication tag for both encryption and decryption, and tag checks are demonstrated in the [`examples/gcm.nim`](https://github.com/cheatfate/nimcrypto/blob/f767595f4ddec2b5570b5194feb96954c00a6499/examples/gcm.nim) file. Thus, the sender is required to retrieve the tag after encryption and transmit it, while the recipient must then check the received tag against the locally calculated tag after decryption. This has historically been a source of errors due to weaknesses involving:
* No other (implemented) cipher mode requires these extra steps, so users may be unaware.
* The need to take extra steps to verify tags is not always well [documented](https://cheatfate.github.io/nimcrypto/nimcrypto/bcmode.html) and/or the documentation is not read.
* [NIST 800-38D](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-38d.pdf) indicates that `decrypt()` functionality either returns plaintext or FAIL on a mismatched tag.
* [Examples](https://github.com/cheatfate/nimcrypto/blob/f767595f4ddec2b5570b5194feb96954c00a6499/examples/gcm.nim) can use an assert which is largely a test primitive and easily overlooked in functional code.
Similarly, as AES-GCM is a stream cipher, encrypting with a random and unique initialization vector (IV) of sufficient length per message is critical, though potentially easily overlooked. Plaintext is (generally) easily extracted from the ciphertext produced by a stream cipher with a fixed IV. The implementation requires the user to supply and manage IVs. NIST 800-38D recommends an IV of 96 bits but allows for more via an extra hashing step. The code correctly handles a 96-bit IV and the extra hashing case, but does allow a very small IV to fall into the latter case. There is a material risk that the user supplies a fixed or reused IB potentially of insufficient size.
## Exploit Scenario
A user forgetting to source, transmit and check tags on decryption or appropriately manage the initialization vectors on encryption will severely damage the integrity and confidentiality assurances provided by AES-GCM.
## Mitigation Recommendation
Adapt the AES-GCM functionality to remove the need for users to handle tags and source IVs by:
* Attach tags to the encrypted message.
* Check the tags inside the decrypt functionality and signal FAIL with no plaintext returned on error.
* Offer a default internal unique 96-bit IV from a cryptographically secure source of randomness per outbound message.
* Document the importance of tag verification and unique/random IVs.
Separately and as an informational aside, older cipher modes such as AES-ECB should be removed wherever possible, to prevent users from being attracted to the first, simplest and/or most familiar option that may be less secure.
## References
References are linked above. Note that @cheatfate asked me to file nimcrypto issues under nim-beacon-chain | 1.0 | [SEC] Brittle AES-GCM Tag/IV Construction in Nimcrypto - ## Description
The [`bcmode.nim`](https://github.com/cheatfate/nimcrypto/blob/f767595f4ddec2b5570b5194feb96954c00a6499/nimcrypto/bcmode.nim) source file provides support for large range of block ciphers including AES-ECB and AES-GCM.
For the AES-GCM cipher mode, the code provides a `getTag()` function that gives the caller access to the authentication tag for both encryption and decryption, and tag checks are demonstrated in the [`examples/gcm.nim`](https://github.com/cheatfate/nimcrypto/blob/f767595f4ddec2b5570b5194feb96954c00a6499/examples/gcm.nim) file. Thus, the sender is required to retrieve the tag after encryption and transmit it, while the recipient must then check the received tag against the locally calculated tag after decryption. This has historically been a source of errors due to weaknesses involving:
* No other (implemented) cipher mode requires these extra steps, so users may be unaware.
* The need to take extra steps to verify tags is not always well [documented](https://cheatfate.github.io/nimcrypto/nimcrypto/bcmode.html) and/or the documentation is not read.
* [NIST 800-38D](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-38d.pdf) indicates that `decrypt()` functionality either returns plaintext or FAIL on a mismatched tag.
* [Examples](https://github.com/cheatfate/nimcrypto/blob/f767595f4ddec2b5570b5194feb96954c00a6499/examples/gcm.nim) can use an assert which is largely a test primitive and easily overlooked in functional code.
Similarly, as AES-GCM is a stream cipher, encrypting with a random and unique initialization vector (IV) of sufficient length per message is critical, though potentially easily overlooked. Plaintext is (generally) easily extracted from the ciphertext produced by a stream cipher with a fixed IV. The implementation requires the user to supply and manage IVs. NIST 800-38D recommends an IV of 96 bits but allows for more via an extra hashing step. The code correctly handles a 96-bit IV and the extra hashing case, but does allow a very small IV to fall into the latter case. There is a material risk that the user supplies a fixed or reused IB potentially of insufficient size.
## Exploit Scenario
A user forgetting to source, transmit and check tags on decryption or appropriately manage the initialization vectors on encryption will severely damage the integrity and confidentiality assurances provided by AES-GCM.
## Mitigation Recommendation
Adapt the AES-GCM functionality to remove the need for users to handle tags and source IVs by:
* Attach tags to the encrypted message.
* Check the tags inside the decrypt functionality and signal FAIL with no plaintext returned on error.
* Offer a default internal unique 96-bit IV from a cryptographically secure source of randomness per outbound message.
* Document the importance of tag verification and unique/random IVs.
Separately and as an informational aside, older cipher modes such as AES-ECB should be removed wherever possible, to prevent users from being attracted to the first, simplest and/or most familiar option that may be less secure.
## References
References are linked above. Note that @cheatfate asked me to file nimcrypto issues under nim-beacon-chain | priority | brittle aes gcm tag iv construction in nimcrypto description the source file provides support for large range of block ciphers including aes ecb and aes gcm for the aes gcm cipher mode the code provides a gettag function that gives the caller access to the authentication tag for both encryption and decryption and tag checks are demonstrated in the file thus the sender is required to retrieve the tag after encryption and transmit it while the recipient must then check the received tag against the locally calculated tag after decryption this has historically been a source of errors due to weaknesses involving no other implemented cipher mode requires these extra steps so users may be unaware the need to take extra steps to verify tags is not always well and or the documentation is not read indicates that decrypt functionality either returns plaintext or fail on a mismatched tag can use an assert which is largely a test primitive and easily overlooked in functional code similarly as aes gcm is a stream cipher encrypting with a random and unique initialization vector iv of sufficient length per message is critical though potentially easily overlooked plaintext is generally easily extracted from the ciphertext produced by a stream cipher with a fixed iv the implementation requires the user to supply and manage ivs nist recommends an iv of bits but allows for more via an extra hashing step the code correctly handles a bit iv and the extra hashing case but does allow a very small iv to fall into the latter case there is a material risk that the user supplies a fixed or reused ib potentially of insufficient size exploit scenario a user forgetting to source transmit and check tags on decryption or appropriately manage the initialization vectors on encryption will severely damage the integrity and confidentiality assurances provided by aes gcm mitigation recommendation adapt the aes gcm functionality to remove the need for users to handle tags and source ivs by attach tags to the encrypted message check the tags inside the decrypt functionality and signal fail with no plaintext returned on error offer a default internal unique bit iv from a cryptographically secure source of randomness per outbound message document the importance of tag verification and unique random ivs separately and as an informational aside older cipher modes such as aes ecb should be removed wherever possible to prevent users from being attracted to the first simplest and or most familiar option that may be less secure references references are linked above note that cheatfate asked me to file nimcrypto issues under nim beacon chain | 1 |
522,633 | 15,164,147,480 | IssuesEvent | 2021-02-12 13:19:28 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.2.0 beta develop-111]Map Filter UX improvement: Adding folders/sorting for different layers by category | Category: UI Priority: Medium Squad: Wild Turkey Status: Fixed Type: Quality of Life | Would like to see a massive quality of life improvement when using the map filters and looking at different layers.
Right now for example when looking into Plants we have a list of 120+ items which is very annoying when interacting with the map
this could get some very good improvements by categorizing further into sub folders.
Plants Group ( not sure but maybe rename this into something more sciency? This category does not have functional overlays anyway at the moment)
|>Trees
------ |>Population
------ |>Yield Potential
|>Plants
------ |>Population
------ |>Yield Potential
We also have Animals that are a bit unruly where we could split them into Population vs capacity to make it easier for players
where we could also further separate this into Land based and Sea/Water based animals which would help the management when more animals are added.
Animals
|> Terrestrial Animals
------ |> Population
------ |> Capacity
|> Aquatic Animals
------ |> Population
------ |> Capacity
| 1.0 | [0.9.2.0 beta develop-111]Map Filter UX improvement: Adding folders/sorting for different layers by category - Would like to see a massive quality of life improvement when using the map filters and looking at different layers.
Right now for example when looking into Plants we have a list of 120+ items which is very annoying when interacting with the map
this could get some very good improvements by categorizing further into sub folders.
Plants Group ( not sure but maybe rename this into something more sciency? This category does not have functional overlays anyway at the moment)
|>Trees
------ |>Population
------ |>Yield Potential
|>Plants
------ |>Population
------ |>Yield Potential
We also have Animals that are a bit unruly where we could split them into Population vs capacity to make it easier for players
where we could also further separate this into Land based and Sea/Water based animals which would help the management when more animals are added.
Animals
|> Terrestrial Animals
------ |> Population
------ |> Capacity
|> Aquatic Animals
------ |> Population
------ |> Capacity
| priority | map filter ux improvement adding folders sorting for different layers by category would like to see a massive quality of life improvement when using the map filters and looking at different layers right now for example when looking into plants we have a list of items which is very annoying when interacting with the map this could get some very good improvements by categorizing further into sub folders plants group not sure but maybe rename this into something more sciency this category does not have functional overlays anyway at the moment trees population yield potential plants population yield potential we also have animals that are a bit unruly where we could split them into population vs capacity to make it easier for players where we could also further separate this into land based and sea water based animals which would help the management when more animals are added animals terrestrial animals population capacity aquatic animals population capacity | 1 |
177,811 | 6,587,465,975 | IssuesEvent | 2017-09-13 21:11:17 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [craftercms] Add craftercms-test-suite to the craftercms bundle | Priority: Medium task | Please add https://github.com/craftercms/craftercms-test-suite to the core bundle and allow testing.
`gradle init` should pull this repo along with others.
`gradle test` should check if started, if not, error out and ask the user to `start` first (I'm open to suggestions here) -- this should also tell the user where the test reports are
| 1.0 | [craftercms] Add craftercms-test-suite to the craftercms bundle - Please add https://github.com/craftercms/craftercms-test-suite to the core bundle and allow testing.
`gradle init` should pull this repo along with others.
`gradle test` should check if started, if not, error out and ask the user to `start` first (I'm open to suggestions here) -- this should also tell the user where the test reports are
| priority | add craftercms test suite to the craftercms bundle please add to the core bundle and allow testing gradle init should pull this repo along with others gradle test should check if started if not error out and ask the user to start first i m open to suggestions here this should also tell the user where the test reports are | 1 |
91,907 | 3,863,517,333 | IssuesEvent | 2016-04-08 09:45:49 | iamxavier/elmah | https://api.github.com/repos/iamxavier/elmah | closed | Add a timestamp column to the database schema | auto-migrated Priority-Medium Type-Enhancement | ```
What new or enhanced feature are you proposing?
Add a timestamp column to the database schema.
What goal would this enhancement help you achieve?
It would support Linq-To-Sql better, by allowing it to intrinsically
resolve optimistic concurrency when using a disconnected Linq-To-Sql model.
Furthermore, this would provide custom replication support, concurrency
support, etc.
The downside is that "timestamp" may not exist in some DBs. Ug.
Well, we should try anyway, IMHO, if it is possible.
For more details, see one of the many pieces of writing on the matter on
the web, suchj as...
http://www.west-wind.com/WebLog/ShowPost.aspx?id=134095
http://www.west-wind.com/weblog/posts/135659.aspx
...etc.
Just an idea.
Thank you.
-- Mark Kamoski
```
Original issue reported on code.google.com by `mkamo...@gmail.com` on 14 Jul 2009 at 2:07 | 1.0 | Add a timestamp column to the database schema - ```
What new or enhanced feature are you proposing?
Add a timestamp column to the database schema.
What goal would this enhancement help you achieve?
It would support Linq-To-Sql better, by allowing it to intrinsically
resolve optimistic concurrency when using a disconnected Linq-To-Sql model.
Furthermore, this would provide custom replication support, concurrency
support, etc.
The downside is that "timestamp" may not exist in some DBs. Ug.
Well, we should try anyway, IMHO, if it is possible.
For more details, see one of the many pieces of writing on the matter on
the web, suchj as...
http://www.west-wind.com/WebLog/ShowPost.aspx?id=134095
http://www.west-wind.com/weblog/posts/135659.aspx
...etc.
Just an idea.
Thank you.
-- Mark Kamoski
```
Original issue reported on code.google.com by `mkamo...@gmail.com` on 14 Jul 2009 at 2:07 | priority | add a timestamp column to the database schema what new or enhanced feature are you proposing add a timestamp column to the database schema what goal would this enhancement help you achieve it would support linq to sql better by allowing it to intrinsically resolve optimistic concurrency when using a disconnected linq to sql model furthermore this would provide custom replication support concurrency support etc the downside is that timestamp may not exist in some dbs ug well we should try anyway imho if it is possible for more details see one of the many pieces of writing on the matter on the web suchj as etc just an idea thank you mark kamoski original issue reported on code google com by mkamo gmail com on jul at | 1 |
87,299 | 3,744,701,288 | IssuesEvent | 2016-03-10 03:30:15 | rettigs/cs-senior-capstone | https://api.github.com/repos/rettigs/cs-senior-capstone | closed | Make manual overrides apply until next rule change rather than set time period | enhancement priority:medium | Currently, manual overrides last for a configurable amount of time (2 hours by default). They should last until a rule would have otherwise changed the state. For example, if your lights are set to turn off every morning at 6am, but you manually turn them on at 7am, they will stay on all day until the next morning at 6am. | 1.0 | Make manual overrides apply until next rule change rather than set time period - Currently, manual overrides last for a configurable amount of time (2 hours by default). They should last until a rule would have otherwise changed the state. For example, if your lights are set to turn off every morning at 6am, but you manually turn them on at 7am, they will stay on all day until the next morning at 6am. | priority | make manual overrides apply until next rule change rather than set time period currently manual overrides last for a configurable amount of time hours by default they should last until a rule would have otherwise changed the state for example if your lights are set to turn off every morning at but you manually turn them on at they will stay on all day until the next morning at | 1 |
451,624 | 13,039,427,997 | IssuesEvent | 2020-07-28 16:44:04 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | opened | AOD Cases should sort to the top | Priority: Medium Product: caseflow-queue Stakeholder: BVA Team: Echo 🐬 Type: Bug | ## Description
While investigating a seperate issue in production, I observed that AOD cases are not bubbling up to the top of a users queue, like we want. Specifically, non-original cases were appearing first in a Judges Assign queue.
## Acceptance criteria
- [ ] AOD cases should always sort to the top of a users queue on load.
## Background/context/resources
## Technical notes
| 1.0 | AOD Cases should sort to the top - ## Description
While investigating a seperate issue in production, I observed that AOD cases are not bubbling up to the top of a users queue, like we want. Specifically, non-original cases were appearing first in a Judges Assign queue.
## Acceptance criteria
- [ ] AOD cases should always sort to the top of a users queue on load.
## Background/context/resources
## Technical notes
| priority | aod cases should sort to the top description while investigating a seperate issue in production i observed that aod cases are not bubbling up to the top of a users queue like we want specifically non original cases were appearing first in a judges assign queue acceptance criteria aod cases should always sort to the top of a users queue on load background context resources technical notes | 1 |
181,312 | 6,658,276,638 | IssuesEvent | 2017-09-30 17:15:11 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Outgoing webhooks shouldn't throw an exception on network errors | area: api area: bots bug in progress priority: medium | I think we try to suppress these in `do_rest_call`, but the exception list is clearly not complete.
```
Traceback (most recent call last):
File "/home/zulip/deployments/2017-08-14-16-02-48/zerver/lib/outgoing_webhook.py", line 192, in do_rest_call
response = requests.request(http_method, final_url, data=request_data, **request_kwargs)
File "/home/zulip/deployments/2017-08-14-16-02-48/zulip-venv/lib/python2.7/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/home/zulip/deployments/2017-08-14-16-02-48/zulip-venv/lib/python2.7/site-packages/requests/sessions.py", line 502, in request
resp = self.send(prep, **send_kwargs)
File "/home/zulip/deployments/2017-08-14-16-02-48/zulip-venv/lib/python2.7/site-packages/requests/sessions.py", line 612, in send
r = adapter.send(request, **kwargs)
File "/home/zulip/deployments/2017-08-14-16-02-48/zulip-venv/lib/python2.7/site-packages/requests/adapters.py", line 504, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPSConnectionPool(host='zulipbotserver.herokuapp.com', port=33507): Max retries exceeded with url: /bots/converter (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fc5ebd84490>: Failed to establish a new connection: [Errno 111] Connection refused',))
```
This is a fairly high-priority issue, since it will result in a lot of error email spam to administrators of servers using outgoing webhooks (at least if their users configure them wrong...).
A secondary issue we should think about after resolving the initial problem is how to send feedback to the user about the fact that their webhook isn't working. I think it might make sense to have them get a PM from the bot or something notifying them. | 1.0 | Outgoing webhooks shouldn't throw an exception on network errors - I think we try to suppress these in `do_rest_call`, but the exception list is clearly not complete.
```
Traceback (most recent call last):
File "/home/zulip/deployments/2017-08-14-16-02-48/zerver/lib/outgoing_webhook.py", line 192, in do_rest_call
response = requests.request(http_method, final_url, data=request_data, **request_kwargs)
File "/home/zulip/deployments/2017-08-14-16-02-48/zulip-venv/lib/python2.7/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/home/zulip/deployments/2017-08-14-16-02-48/zulip-venv/lib/python2.7/site-packages/requests/sessions.py", line 502, in request
resp = self.send(prep, **send_kwargs)
File "/home/zulip/deployments/2017-08-14-16-02-48/zulip-venv/lib/python2.7/site-packages/requests/sessions.py", line 612, in send
r = adapter.send(request, **kwargs)
File "/home/zulip/deployments/2017-08-14-16-02-48/zulip-venv/lib/python2.7/site-packages/requests/adapters.py", line 504, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPSConnectionPool(host='zulipbotserver.herokuapp.com', port=33507): Max retries exceeded with url: /bots/converter (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fc5ebd84490>: Failed to establish a new connection: [Errno 111] Connection refused',))
```
This is a fairly high-priority issue, since it will result in a lot of error email spam to administrators of servers using outgoing webhooks (at least if their users configure them wrong...).
A secondary issue we should think about after resolving the initial problem is how to send feedback to the user about the fact that their webhook isn't working. I think it might make sense to have them get a PM from the bot or something notifying them. | priority | outgoing webhooks shouldn t throw an exception on network errors i think we try to suppress these in do rest call but the exception list is clearly not complete traceback most recent call last file home zulip deployments zerver lib outgoing webhook py line in do rest call response requests request http method final url data request data request kwargs file home zulip deployments zulip venv lib site packages requests api py line in request return session request method method url url kwargs file home zulip deployments zulip venv lib site packages requests sessions py line in request resp self send prep send kwargs file home zulip deployments zulip venv lib site packages requests sessions py line in send r adapter send request kwargs file home zulip deployments zulip venv lib site packages requests adapters py line in send raise connectionerror e request request connectionerror httpsconnectionpool host zulipbotserver herokuapp com port max retries exceeded with url bots converter caused by newconnectionerror failed to establish a new connection connection refused this is a fairly high priority issue since it will result in a lot of error email spam to administrators of servers using outgoing webhooks at least if their users configure them wrong a secondary issue we should think about after resolving the initial problem is how to send feedback to the user about the fact that their webhook isn t working i think it might make sense to have them get a pm from the bot or something notifying them | 1 |
787,635 | 27,725,294,521 | IssuesEvent | 2023-03-15 01:21:52 | vrchatapi/vrchatapi.github.io | https://api.github.com/repos/vrchatapi/vrchatapi.github.io | closed | Missing Websocket events | Type: Undocumented Endpoint Priority: Medium | The defined ones on web are:
notification, notification-v2, notification-v2-update, notification-v2-delete, see-notification, hide-notification, clear-notification, friend-add, friend-delete, friend-online, friend-active, friend-offline, friend-update, friend-location, user-update, user-location
The new ones are:
group-joined, group-left
We are missing documentation on several of these. | 1.0 | Missing Websocket events - The defined ones on web are:
notification, notification-v2, notification-v2-update, notification-v2-delete, see-notification, hide-notification, clear-notification, friend-add, friend-delete, friend-online, friend-active, friend-offline, friend-update, friend-location, user-update, user-location
The new ones are:
group-joined, group-left
We are missing documentation on several of these. | priority | missing websocket events the defined ones on web are notification notification notification update notification delete see notification hide notification clear notification friend add friend delete friend online friend active friend offline friend update friend location user update user location the new ones are group joined group left we are missing documentation on several of these | 1 |
690,808 | 23,673,098,027 | IssuesEvent | 2022-08-27 17:16:00 | crcn/tandem | https://api.github.com/repos/crcn/tandem | opened | support different rendering engines | priority: medium effort: medium | Good exercise for figuring out a generic library.
Examples:
- ThreeJS
Testing how to:
- Add custom components
- Generic sidebar controls
- generic canvas controls
| 1.0 | support different rendering engines - Good exercise for figuring out a generic library.
Examples:
- ThreeJS
Testing how to:
- Add custom components
- Generic sidebar controls
- generic canvas controls
| priority | support different rendering engines good exercise for figuring out a generic library examples threejs testing how to add custom components generic sidebar controls generic canvas controls | 1 |
419,624 | 12,226,075,035 | IssuesEvent | 2020-05-03 09:06:51 | dita-ot/dita-ot | https://api.github.com/repos/dita-ot/dita-ot | closed | Provide topicref to topics handled in processTopic mode called from generatePageSequenceFromTopicref | enhancement plugin/pdf priority/medium stale | org.dita.pdf2 plugin
## Expected Behavior
Templates handling topics have easy access to the topicref that referred to the topic.
## Actual Behavior
Topicref is not available.
## Possible Solution
Pass the topicref as a tunnel parameter to templates in mode processTopic in root-processing.xsl:
```
<xsl:template match="*[contains(@class,' map/topicref ')]" mode="generatePageSequenceFromTopicref">
<xsl:variable name="referencedTopic" select="key('topic-id', @id)" as="element()*"/>
<xsl:choose>
<xsl:when test="empty($referencedTopic)">
<xsl:apply-templates select="*[contains(@class, ' map/topicref ')]" mode="generatePageSequenceFromTopicref"/>
</xsl:when>
<xsl:otherwise>
<xsl:apply-templates select="$referencedTopic" mode="processTopic">
<xsl:with-param name="topicref" as="element()" tunnel="yes" select="."/>
</xsl:apply-templates>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
```
| 1.0 | Provide topicref to topics handled in processTopic mode called from generatePageSequenceFromTopicref - org.dita.pdf2 plugin
## Expected Behavior
Templates handling topics have easy access to the topicref that referred to the topic.
## Actual Behavior
Topicref is not available.
## Possible Solution
Pass the topicref as a tunnel parameter to templates in mode processTopic in root-processing.xsl:
```
<xsl:template match="*[contains(@class,' map/topicref ')]" mode="generatePageSequenceFromTopicref">
  <xsl:variable name="referencedTopic" select="key('topic-id', @id)" as="element()*"/>
  <xsl:choose>
    <xsl:when test="empty($referencedTopic)">
      <xsl:apply-templates select="*[contains(@class, ' map/topicref ')]" mode="generatePageSequenceFromTopicref"/>
    </xsl:when>
    <xsl:otherwise>
      <xsl:apply-templates select="$referencedTopic" mode="processTopic">
        <xsl:with-param name="topicref" as="element()" tunnel="yes" select="."/>
      </xsl:apply-templates>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```
| priority | provide topicref to topics handled in processtopic mode called from generatepagesequencefromtopicref org dita plugin expected behavior templates handling topics have easy access to the topicref that referred to the topic actual behavior topicref is not available possible solution pass the topicref as a tunnel parameter to templates in mode processtopic in root processing xsl | 1 |
413,788 | 12,092,155,253 | IssuesEvent | 2020-04-19 14:35:40 | lorenzwalthert/precommit | https://api.github.com/repos/lorenzwalthert/precommit | closed | Allow to choose installation environment | Complexity: Medium Priority: High Status: WIP Type: Enhancement | As mentioned in [#113 ](https://github.com/lorenzwalthert/precommit/issues/113#issuecomment-603808455). Recently I've tried to run `keras::install_keras()` inside the Docker image with already installed precommit. Looks like both keras and tensorflow are now [installed by default to `r-reticulate`](https://github.com/rstudio/keras/issues/1014).
This results in the following error:
>Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
Examining setuptools: 7%|▋ | 2/30 [00:00<00:00, 2603.54it/s]
Comparing specs that have this dependency: 0%| | 0/15 [00:00<?, ?it/s]
Finding conflict paths: 0%| | 0/2 [00:00<?, ?it/s]
Finding shortest conflict path for setuptools: 0%| | 0/2 [00:00<?, ?it/s]
Finding shortest conflict path for setuptools: 50%|█████ | 1/2 [00:01<00:01, 1.99s/it]
Finding shortest conflict path for setuptools: 100%|██████████| 2/2 [00:01<00:00, 1.01it/s]
...truncated... | 1.0 | Allow to choose installation environment - As mentioned in [#113 ](https://github.com/lorenzwalthert/precommit/issues/113#issuecomment-603808455). Recently I've tried to run `keras::install_keras()` inside the Docker image with already installed precommit. Looks like both keras and tensorflow are now [installed by default to `r-reticulate`](https://github.com/rstudio/keras/issues/1014).
This results in the following error:
>Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
Examining setuptools: 7%|▋ | 2/30 [00:00<00:00, 2603.54it/s]
Comparing specs that have this dependency: 0%| | 0/15 [00:00<?, ?it/s]
Finding conflict paths: 0%| | 0/2 [00:00<?, ?it/s]
Finding shortest conflict path for setuptools: 0%| | 0/2 [00:00<?, ?it/s]
Finding shortest conflict path for setuptools: 50%|█████ | 1/2 [00:01<00:01, 1.99s/it]
Finding shortest conflict path for setuptools: 100%|██████████| 2/2 [00:01<00:00, 1.01it/s]
...truncated... | priority | allow to choose installation environment as mentioned in recently i ve tried to run keras install keras inside the docker image with already installed precommit looks like both keras and tensorflow are now this results in the following error collecting package metadata current repodata json working done solving environment working failed with initial frozen solve retrying with flexible solve solving environment working failed with repodata from current repodata json will retry with next repodata source collecting package metadata repodata json working done solving environment working failed with initial frozen solve retrying with flexible solve examining setuptools ▋ comparing specs that have this dependency finding conflict paths finding shortest conflict path for setuptools finding shortest conflict path for setuptools █████ finding shortest conflict path for setuptools ██████████ truncated | 1 |
224,254 | 7,468,230,018 | IssuesEvent | 2018-04-02 18:14:25 | IfyAniefuna/experiment_metadata | https://api.github.com/repos/IfyAniefuna/experiment_metadata | closed | Implement drag and drop functionality for Yara and Richard's master spreadsheet | medium priority | Include the feature to specify which rows (I believe it was the serial number that was used as a label) in the imported CSV to output, avoiding generating CSVs for a massive spreadsheet. | 1.0 | Implement drag and drop functionality for Yara and Richard's master spreadsheet - Include the feature to specify which rows (I believe it was the serial number that was used as a label) in the imported CSV to output, avoiding generating CSVs for a massive spreadsheet. | priority | implement drag and drop functionality for yara and richard s master spreadsheet include the feature to specify which rows i believe it was the serial number that was used as a label in the imported csv to output avoiding generating csvs for a massive spreadsheet | 1 |
77,076 | 3,506,258,706 | IssuesEvent | 2016-01-08 05:02:24 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | .server restart (BB #137) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** digerago
**Original Date:** 02.05.2010 17:18:46 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/137
<hr>
I have long observed this bug.
when I use the command. server restart (* TIME *) server just shuts down after time. Data Management - Error in my_thread_global_end (): 1 threads didn't exit
and server does not restart
principle the not very important | 1.0 | .server restart (BB #137) - This issue was migrated from bitbucket.
**Original Reporter:** digerago
**Original Date:** 02.05.2010 17:18:46 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/137
<hr>
I have long observed this bug.
when I use the command. server restart (* TIME *) server just shuts down after time. Data Management - Error in my_thread_global_end (): 1 threads didn't exit
and server does not restart
principle the not very important | priority | server restart bb this issue was migrated from bitbucket original reporter digerago original date gmt original priority major original type bug original state resolved direct link i have long observed this bug when i use the command server restart time server just shuts down after time data management error in my thread global end threads didn t exit and server does not restart principle the not very important | 1 |
36,158 | 2,796,141,580 | IssuesEvent | 2015-05-12 04:21:59 | twogee/ant-http | https://api.github.com/repos/twogee/ant-http | opened | Create test cases for ant task | auto-migrated Milestone-1.2 Priority-Medium Project-ant-http Type-Task | <a href="https://github.com/GoogleCodeExporter"><img src="https://avatars.githubusercontent.com/u/9614759?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)**
_Monday May 11, 2015 at 22:05 GMT_
_Originally opened as https://github.com/twogee/missing-link/issues/9_
----
```
Create test cases for ant task
```
Original issue reported on code.google.com by `alex.she...@gmail.com` on 19 Mar 2011 at 12:04
| 1.0 | Create test cases for ant task - <a href="https://github.com/GoogleCodeExporter"><img src="https://avatars.githubusercontent.com/u/9614759?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)**
_Monday May 11, 2015 at 22:05 GMT_
_Originally opened as https://github.com/twogee/missing-link/issues/9_
----
```
Create test cases for ant task
```
Original issue reported on code.google.com by `alex.she...@gmail.com` on 19 Mar 2011 at 12:04
| priority | create test cases for ant task issue by monday may at gmt originally opened as create test cases for ant task original issue reported on code google com by alex she gmail com on mar at | 1 |
48,053 | 2,990,137,908 | IssuesEvent | 2015-07-21 07:13:31 | jayway/rest-assured | https://api.github.com/repos/jayway/rest-assured | closed | Add peek and prettyPeek to XmlPath | enhancement imported Priority-Medium | _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on November 21, 2013 16:41:00_
Should print the value to the console but return an instance of XmlPath
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=271_ | 1.0 | Add peek and prettyPeek to XmlPath - _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on November 21, 2013 16:41:00_
Should print the value to the console but return an instance of XmlPath
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=271_ | priority | add peek and prettypeek to xmlpath from on november should print the value to the console but return an instance of xmlpath original issue | 1 |
600,064 | 18,288,680,677 | IssuesEvent | 2021-10-05 13:09:42 | qlicker/qlicker | https://api.github.com/repos/qlicker/qlicker | closed | Bug in group managements | bug Medium priority | Suppose you go to Manage Groups, and there are 10 groups numbered 1 to 10. Suppose you delete group #5.
Now go to Group 1, and click the Next Group -> button. You will go Group 1, Group 2, Group 3, Group 4, Group 6, Group 8, Group 10.
The <- Previous Group button only works when you are on Groups 1-4, but doesn't do anything when you are in Groups 6-10 | 1.0 | Bug in group managements - Suppose you go to Manage Groups, and there are 10 groups numbered 1 to 10. Suppose you delete group #5.
Now go to Group 1, and click the Next Group -> button. You will go Group 1, Group 2, Group 3, Group 4, Group 6, Group 8, Group 10.
The <- Previous Group button only works when you are on Groups 1-4, but doesn't do anything when you are in Groups 6-10 | priority | bug in group managements suppose you go to manage groups and there are groups numbered to suppose you delete group now go to group and click the next group button you will go group group group group group group group the previous group button only works when you are on groups but doesn t do anything when you are in groups | 1 |
283,012 | 8,712,895,467 | IssuesEvent | 2018-12-06 23:59:53 | aowen87/TicketTester | https://api.github.com/repos/aowen87/TicketTester | closed | VisIt hangs during re-execution prompted by pick. | bug crash likelihood medium priority reviewed severity high wrong results | This is a bug Bruce Hammel claims to have been experiencing with VisIt for years, so I've set the priority as high, now that I have reproducible steps.
This seems only to occur with 2 nodes. Multiple processors on single node does not replicate.
Also seems only to occur in conjunction with CoordSwap operator, and with Pick var being set to an expression (and not the pipeline var)
On surface: (ensure parallel engine with 2 nodes)
Open multi_curv2d.silo
Add PC Plot of d
Add CoordSwap operator, swap x and y coords
Draw.
Create a scalar expression d+p,
Open Pick window, set variable to d+p.
Apply
Do a Zone Pick
Using Navigation, change the view either by zooming or panning.
Do another Zone Pick.
Engine will hang, must cancel the engine_par job in order to interact with VisIt again.
Information window shows Pick wanting to re-execute, and a merge exception:
++++++++++++++++
VisIt does not have all the information it needs to perform a pick. Please wait while the necessary information is calculated. All current pick selections have been cached and will be performed when calculations are complete. VisIt will notify you when it is fully ready for more picks.
Shortly thereafter, the following occured...
Pseudocolor: (InvalidMergeException)
viewer: Cannot merge datasets because of an incompatible field 1 and 2.
Pick mode now fully ready.
+++++++++++++++++++++
This shows error seeming to come from the viewer, but if you run with -debug 5, then 2 processors' log files will show this error:
+++++++++++++++++++++++++++++++++++++
This source should not load balance the data.
Exception: (InvalidMergeException) /usr/tmp/brugger/aztec/visitbuild/visit2.8.2/src/avt/Pipeline/Data/avtDataAttributes.C, line 1360: Cannot merge datasets because of an incompatible field 1 and 2.
catch(VisItException) /usr/tmp/brugger/aztec/visitbuild/visit2.8.2/src/engine/main/Executors.h:1027
++++++++++++++++++++++++++++++
This is consistent with what I saw with Bruce's real data, running on Muir in pdebug with 10 nodes and 120 processors, but also 2 nodes and 24 processors.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2169
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: VisIt hangs during re-execution prompted by pick.
Assigned to: Kathleen Biagas
Category:
Target version: 2.9.1
Author: Kathleen Biagas
Start: 03/03/2015
Due date:
% Done: 100
Estimated time:
Created: 03/03/2015 04:55 pm
Updated: 03/20/2015 05:44 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.8.2
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
This is a bug Bruce Hammel claims to have been experiencing with VisIt for years, so I've set the priority as high, now that I have reproducible steps.
This seems only to occur with 2 nodes. Multiple processors on single node does not replicate.
Also seems only to occur in conjunction with CoordSwap operator, and with Pick var being set to an expression (and not the pipeline var)
On surface: (ensure parallel engine with 2 nodes)
Open multi_curv2d.silo
Add PC Plot of d
Add CoordSwap operator, swap x and y coords
Draw.
Create a scalar expression d+p,
Open Pick window, set variable to d+p.
Apply
Do a Zone Pick
Using Navigation, change the view either by zooming or panning.
Do another Zone Pick.
Engine will hang, must cancel the engine_par job in order to interact with VisIt again.
Information window shows Pick wanting to re-execute, and a merge exception:
++++++++++++++++
VisIt does not have all the information it needs to perform a pick. Please wait while the necessary information is calculated. All current pick selections have been cached and will be performed when calculations are complete. VisIt will notify you when it is fully ready for more picks.
Shortly thereafter, the following occured...
Pseudocolor: (InvalidMergeException)
viewer: Cannot merge datasets because of an incompatible field 1 and 2.
Pick mode now fully ready.
+++++++++++++++++++++
This shows error seeming to come from the viewer, but if you run with -debug 5, then 2 processors' log files will show this error:
+++++++++++++++++++++++++++++++++++++
This source should not load balance the data.
Exception: (InvalidMergeException) /usr/tmp/brugger/aztec/visitbuild/visit2.8.2/src/avt/Pipeline/Data/avtDataAttributes.C, line 1360: Cannot merge datasets because of an incompatible field 1 and 2.
catch(VisItException) /usr/tmp/brugger/aztec/visitbuild/visit2.8.2/src/engine/main/Executors.h:1027
++++++++++++++++++++++++++++++
This is consistent with what I saw with Bruce's real data, running on Muir in pdebug with 10 nodes and 120 processors, but also 2 nodes and 24 processors.
Comments:
Turns out that Pick was sending the secondary variable request to only 1 processor, which caused an 'Invalid Merge' exception during pipeline re-execution.I modified Pick to request SecondaryVars of all processors.SVN update 25986 (2.9RC), 25988 (trunk)M /src/avt/Queries/Pick/avtPickQuery.C
| 1.0 | VisIt hangs during re-execution prompted by pick. - This is a bug Bruce Hammel claims to have been experiencing with VisIt for years, so I've set the priority as high, now that I have reproducible steps.
This seems only to occur with 2 nodes. Multiple processors on single node does not replicate.
Also seems only to occur in conjunction with CoordSwap operator, and with Pick var being set to an expression (and not the pipeline var)
On surface: (ensure parallel engine with 2 nodes)
Open multi_curv2d.silo
Add PC Plot of d
Add CoordSwap operator, swap x and y coords
Draw.
Create a scalar expression d+p,
Open Pick window, set variable to d+p.
Apply
Do a Zone Pick
Using Navigation, change the view either by zooming or panning.
Do another Zone Pick.
Engine will hang, must cancel the engine_par job in order to interact with VisIt again.
Information window shows Pick wanting to re-execute, and a merge exception:
++++++++++++++++
VisIt does not have all the information it needs to perform a pick. Please wait while the necessary information is calculated. All current pick selections have been cached and will be performed when calculations are complete. VisIt will notify you when it is fully ready for more picks.
Shortly thereafter, the following occured...
Pseudocolor: (InvalidMergeException)
viewer: Cannot merge datasets because of an incompatible field 1 and 2.
Pick mode now fully ready.
+++++++++++++++++++++
This shows error seeming to come from the viewer, but if you run with -debug 5, then 2 processors' log files will show this error:
+++++++++++++++++++++++++++++++++++++
This source should not load balance the data.
Exception: (InvalidMergeException) /usr/tmp/brugger/aztec/visitbuild/visit2.8.2/src/avt/Pipeline/Data/avtDataAttributes.C, line 1360: Cannot merge datasets because of an incompatible field 1 and 2.
catch(VisItException) /usr/tmp/brugger/aztec/visitbuild/visit2.8.2/src/engine/main/Executors.h:1027
++++++++++++++++++++++++++++++
This is consistent with what I saw with Bruce's real data, running on Muir in pdebug with 10 nodes and 120 processors, but also 2 nodes and 24 processors.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2169
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: VisIt hangs during re-execution prompted by pick.
Assigned to: Kathleen Biagas
Category:
Target version: 2.9.1
Author: Kathleen Biagas
Start: 03/03/2015
Due date:
% Done: 100
Estimated time:
Created: 03/03/2015 04:55 pm
Updated: 03/20/2015 05:44 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.8.2
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
This is a bug Bruce Hammel claims to have been experiencing with VisIt for years, so I've set the priority as high, now that I have reproducible steps.
This seems only to occur with 2 nodes. Multiple processors on single node does not replicate.
Also seems only to occur in conjunction with CoordSwap operator, and with Pick var being set to an expression (and not the pipeline var)
On surface: (ensure parallel engine with 2 nodes)
Open multi_curv2d.silo
Add PC Plot of d
Add CoordSwap operator, swap x and y coords
Draw.
Create a scalar expression d+p,
Open Pick window, set variable to d+p.
Apply
Do a Zone Pick
Using Navigation, change the view either by zooming or panning.
Do another Zone Pick.
Engine will hang, must cancel the engine_par job in order to interact with VisIt again.
Information window shows Pick wanting to re-execute, and a merge exception:
++++++++++++++++
VisIt does not have all the information it needs to perform a pick. Please wait while the necessary information is calculated. All current pick selections have been cached and will be performed when calculations are complete. VisIt will notify you when it is fully ready for more picks.
Shortly thereafter, the following occured...
Pseudocolor: (InvalidMergeException)
viewer: Cannot merge datasets because of an incompatible field 1 and 2.
Pick mode now fully ready.
+++++++++++++++++++++
This shows error seeming to come from the viewer, but if you run with -debug 5, then 2 processors' log files will show this error:
+++++++++++++++++++++++++++++++++++++
This source should not load balance the data.
Exception: (InvalidMergeException) /usr/tmp/brugger/aztec/visitbuild/visit2.8.2/src/avt/Pipeline/Data/avtDataAttributes.C, line 1360: Cannot merge datasets because of an incompatible field 1 and 2.
catch(VisItException) /usr/tmp/brugger/aztec/visitbuild/visit2.8.2/src/engine/main/Executors.h:1027
++++++++++++++++++++++++++++++
This is consistent with what I saw with Bruce's real data, running on Muir in pdebug with 10 nodes and 120 processors, but also 2 nodes and 24 processors.
Comments:
Turns out that Pick was sending the secondary variable request to only 1 processor, which caused an 'Invalid Merge' exception during pipeline re-execution.I modified Pick to request SecondaryVars of all processors.SVN update 25986 (2.9RC), 25988 (trunk)M /src/avt/Queries/Pick/avtPickQuery.C
| priority | visit hangs during re execution prompted by pick this is a bug bruce hammel claims to have been experiencing with visit for years so i ve set the priority as high now that i have reproducible steps this seems only to occur with nodes multiple processors on single node does not replicate also seems only to occur in conjunction with coordswap operator and with pick var being set to an expression and not the pipeline var on surface ensure parallel engine with nodes open multi silo add pc plot of d add coordswap operator swap x and y coords draw create a scalar expression d p open pick window set variable to d p apply do a zone pick using navigation change the view either by zooming or panning do another zone pick engine will hang must cancel the engine par job in order to interact with visit again information window shows pick wanting to re execute and a merge exception visit does not have all the information it needs to perform a pick please wait while the necessary information is calculated all current pick selections have been cached and will be performed when calculations are complete visit will notify you when it is fully ready for more picks shortly thereafter the following occured pseudocolor invalidmergeexception viewer cannot merge datasets because of an incompatible field and pick mode now fully ready this shows error seeming to come from the viewer but if you run with debug then processors log files will show this error this source should not load balance the data exception invalidmergeexception usr tmp brugger aztec visitbuild src avt pipeline data avtdataattributes c line cannot merge datasets because of an incompatible field and catch visitexception usr tmp brugger aztec visitbuild src engine main executors h this is consistent with what i saw with bruce s real data running on muir in pdebug with nodes and processors but also nodes and processors redmine migration this ticket was migrated from redmine as such not all information was able to 
be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject visit hangs during re execution prompted by pick assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description this is a bug bruce hammel claims to have been experiencing with visit for years so i ve set the priority as high now that i have reproducible steps this seems only to occur with nodes multiple processors on single node does not replicate also seems only to occur in conjunction with coordswap operator and with pick var being set to an expression and not the pipeline var on surface ensure parallel engine with nodes open multi silo add pc plot of d add coordswap operator swap x and y coords draw create a scalar expression d p open pick window set variable to d p apply do a zone pick using navigation change the view either by zooming or panning do another zone pick engine will hang must cancel the engine par job in order to interact with visit again information window shows pick wanting to re execute and a merge exception visit does not have all the information it needs to perform a pick please wait while the necessary information is calculated all current pick selections have been cached and will be performed when calculations are complete visit will notify you when it is fully ready for more picks shortly thereafter the following occured pseudocolor invalidmergeexception viewer cannot merge datasets because of an incompatible field and pick mode now fully ready this shows error seeming to come from the viewer but if you run with debug then processors log files will show this error this source should not load balance the data exception invalidmergeexception usr tmp brugger aztec visitbuild src avt 
pipeline data avtdataattributes c line cannot merge datasets because of an incompatible field and catch visitexception usr tmp brugger aztec visitbuild src engine main executors h this is consistent with what i saw with bruce s real data running on muir in pdebug with nodes and processors but also nodes and processors comments turns out that pick was sending the secondary variable request to only processor which caused an invalid merge exception during pipeline re execution i modified pick to request secondaryvars of all processors svn update trunk m src avt queries pick avtpickquery c | 1 |
174,071 | 6,536,220,774 | IssuesEvent | 2017-08-31 17:17:28 | OperationCode/operationcode_frontend | https://api.github.com/repos/OperationCode/operationcode_frontend | closed | Update Raven Config | beginner friendly Priority: Medium Type: Feature | <!-- Please fill out one of the sections below based on the type of issue you're creating -->
# Feature
## Why is this feature being added?
<!-- What problem is it solving? What value does it add? -->
Raven is reporting into the backend project in Sentry and there's no good way to separate out issues.
## What should your feature do?
Replace the existing raven config with this line:
`Raven.config('https://a566cf6623524f11990720bcff927f75@sentry.io/207359').install()`
| 1.0 | Update Raven Config - <!-- Please fill out one of the sections below based on the type of issue you're creating -->
# Feature
## Why is this feature being added?
<!-- What problem is it solving? What value does it add? -->
Raven is reporting into the backend project in Sentry and there's no good way to separate out issues.
## What should your feature do?
Replace the existing raven config with this line:
`Raven.config('https://a566cf6623524f11990720bcff927f75@sentry.io/207359').install()`
| priority | update raven config feature why is this feature being added raven is reporting into the backend project in sentry and there s no good way to separate out issues what should your feature do replace the existing raven config with this line raven config | 1 |
85,498 | 3,691,042,387 | IssuesEvent | 2016-02-25 22:18:03 | BCGamer/website | https://api.github.com/repos/BCGamer/website | closed | Replace built-in mezzanine account/email | enhancement medium priority | -Signup Verify - This email is sent when a user signs up, this is OOB and looks pretty terrible. Not a good start for the LANtasy user experience.
-Account Approved - This is the email that is sent when a user has been verified.
-Password Reset - This email is sent when a user need to reset their password.
Will need to create a parent template to use for this (the one used for order email). | 1.0 | Replace built-in mezzanine account/email - -Signup Verify - This email is sent when a user signs up, this is OOB and looks pretty terrible. Not a good start for the LANtasy user experience.
-Account Approved - This is the email that is sent when a user has been verified.
-Password Reset - This email is sent when a user need to reset their password.
Will need to create a parent template to use for this (the one used for order email). | priority | replace built in mezzanine account email signup verify this email is sent when a user signs up this is oob and looks pretty terrible not a good start for the lantasy user experience account approved this is the email that is sent when a user has been verified password reset this email is sent when a user need to reset their password will need to create a parent template to use for this the one used for order email | 1 |
450,792 | 13,019,374,203 | IssuesEvent | 2020-07-26 22:13:21 | dreadnoughtsix/dbmakerpy | https://api.github.com/repos/dreadnoughtsix/dbmakerpy | opened | Handle databases in the current working directory | priority: medium status: in progress type: feature | As the client uses the script in a specific directory, the DBMakerPy must be able to handle `.db` files.
- [ ] Check the working directory for `.db` files
- [ ] Ask the client if they want to access one of the specific databases | 1.0 | Handle databases in the current working directory - As the client uses the script in a specific directory, the DBMakerPy must be able to handle `.db` files.
- [ ] Check the working directory for `.db` files
- [ ] Ask the client if they want to access one of the specific databases | priority | handle databases in the current working directory as the client uses the script in a specific directory the dbmakerpy must be able to handle db files check the working directory for db files ask the client if they want to access one of the specific databases | 1 |
753,617 | 26,355,879,999 | IssuesEvent | 2023-01-11 09:40:27 | pystardust/ani-cli | https://api.github.com/repos/pystardust/ani-cli | opened | ani-cli -U not working No search results found ani-cli | type: bug priority 2: medium | **Metadata (please complete the following information)**
Version: [e.g. 2.0.2]
OS: [e.g. Windows 10 / Linux Mint 20.3]
Shell: [e.g. zsh, run `readlink /bin/sh` to get your shell]
Anime: [e.g. flcl] (if applicable)
**Describe the bug**
Downloading is broken.
It says something about an unsupported protocol, see screenshot.
**Steps To Reproduce**
1. Run `ani-cli -d flcl`
2. Choose 2 (fooly-cooly)
3. Choose episode 1

| 1.0 | ani-cli -U not working No search results found ani-cli - **Metadata (please complete the following information)**
Version: [e.g. 2.0.2]
OS: [e.g. Windows 10 / Linux Mint 20.3]
Shell: [e.g. zsh, run `readlink /bin/sh` to get your shell]
Anime: [e.g. flcl] (if applicable)
**Describe the bug**
Downloading is broken.
It says something about an unsupported protocol, see screenshot.
**Steps To Reproduce**
1. Run `ani-cli -d flcl`
2. Choose 2 (fooly-cooly)
3. Choose episode 1

| priority | ani cli u not working no search results found ani cli metadata please complete the following information version os shell anime if applicable describe the bug downloading is broken it says something about an unsupported protocol see screenshot steps to reproduce run ani cli d flcl choose fooly cooly choose episode | 1 |
20,590 | 2,622,854,065 | IssuesEvent | 2015-03-04 08:06:48 | max99x/pagemon-chrome-ext | https://api.github.com/repos/max99x/pagemon-chrome-ext | closed | email notifications | auto-migrated Priority-Medium | ```
This seems like the perfect app for what I need except that I need to get
notifications by text or email so that I can be aware of changes immediately
even when I am not near my computer. Is that possible with this app?
```
Original issue reported on code.google.com by `flordech...@gmail.com` on 6 Feb 2014 at 5:49
* Merged into: #75 | 1.0 | email notifications - ```
This seems like the perfect app for what I need except that I need to get
notifications by text or email so that I can be aware of changes immediately
even when I am not near my computer. Is that possible with this app?
```
Original issue reported on code.google.com by `flordech...@gmail.com` on 6 Feb 2014 at 5:49
* Merged into: #75 | priority | email notifications this seems like the perfect app for what i need except that i need to get notifications by text or email so that i can be aware of changes immediately even when i am not near my computer is that possible with this app original issue reported on code google com by flordech gmail com on feb at merged into | 1 |
3,229 | 2,537,516,740 | IssuesEvent | 2015-01-26 21:08:07 | web2py/web2py | https://api.github.com/repos/web2py/web2py | opened | T.is_writable cannot be disabled | 1 star bug imported Priority-Medium | _From [mca..._at_gmail.com](https://code.google.com/u/109910092030153265550/) on May 20, 2014 21:48:48_
* What steps will reproduce the problem?
-Test this in a controller:
T.is_writable = False
def index():
db = DAL('sqlite:memory:')
db.define_table('event',
Field('date_time', 'datetime')
)
return dict(html=SQLFORM.grid(db.event, user_signature=False))
* What is the expected output? What do you see instead?
-Expect no changes at language file, but this entry is automatically added:
'Date Time': 'Date Time',
* What version of the product are you using? On what operating system?
-Ubuntu 10.04 Version 2.9.5-trunk+timestamp.2014.05.09.15.41.38
* Please provide any additional information below.
-Related discussion: https://groups.google.com/forum/#!topic/web2py/xlLmnw8cBvg
_Original issue: http://code.google.com/p/web2py/issues/detail?id=1936_ | 1.0 | T.is_writable cannot be disabled - _From [mca..._at_gmail.com](https://code.google.com/u/109910092030153265550/) on May 20, 2014 21:48:48_
* What steps will reproduce the problem?
-Test this in a controller:
T.is_writable = False
def index():
db = DAL('sqlite:memory:')
db.define_table('event',
Field('date_time', 'datetime')
)
return dict(html=SQLFORM.grid(db.event, user_signature=False))
* What is the expected output? What do you see instead?
-Expect no changes at language file, but this entry is automatically added:
'Date Time': 'Date Time',
* What version of the product are you using? On what operating system?
-Ubuntu 10.04 Version 2.9.5-trunk+timestamp.2014.05.09.15.41.38
* Please provide any additional information below.
-Related discussion: https://groups.google.com/forum/#!topic/web2py/xlLmnw8cBvg
_Original issue: http://code.google.com/p/web2py/issues/detail?id=1936_ | priority | t is writable cannot be disabled from on may what steps will reproduce the problem test this in a controller t is writable false def index db dal sqlite memory db define table event field date time datetime return dict html sqlform grid db event user signature false what is the expected output what do you see instead expect no changes at language file but this entry is automatically added date time date time what version of the product are you using on what operating system ubuntu version trunk timestamp please provide any additional information below related discussion original issue | 1 |
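The bug record above describes a translator whose `is_writable` flag should stop new keys from being auto-added to the language file. A minimal toy sketch of that contract (this is not web2py's real `T` object; the class is illustrative only) looks like this:

```python
# Toy sketch of the translator contract at issue: when is_writable is
# False, looking up an unknown key must NOT record a new entry.
# This is an illustrative stand-in, not web2py's actual implementation.
class Translator:
    def __init__(self, entries=None, is_writable=True):
        self.entries = dict(entries or {})
        self.is_writable = is_writable

    def __call__(self, key):
        # Only a writable translator records unknown keys (web2py would
        # then persist them to the language file).
        if key not in self.entries and self.is_writable:
            self.entries[key] = key
        return self.entries.get(key, key)
```

The reported bug is that setting the flag at module level had no effect, so `'Date Time'` was added anyway; in this sketch the flag is honored.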
133,184 | 5,198,494,482 | IssuesEvent | 2017-01-23 18:16:34 | Innovate-Inc/EnviroAtlas | https://api.github.com/repos/Innovate-Inc/EnviroAtlas | closed | Enable both National and Community geography toggle switches at start | Medium Priority MVP tableOfContents | This is a very tiny request, but creating an issue so we don't forget and can check it off the list! | 1.0 | Enable both National and Community geography toggle switches at start - This is a very tiny request, but creating an issue so we don't forget and can check it off the list! | priority | enable both national and community geography toggle switches at start this is a very tiny request but creating an issue so we don t forget and can check it off the list | 1 |
49,348 | 3,002,141,395 | IssuesEvent | 2015-07-24 15:32:33 | jayway/powermock | https://api.github.com/repos/jayway/powermock | opened | PowerMockRule interacts poorly with mocked member variables | bug imported Priority-Medium | _From [nachtr...@gmail.com](https://code.google.com/u/112111327311455846174/) on January 18, 2012 23:15:32_
What steps will reproduce the problem? Following the methodology here: https://code.google.com/p/powermock/wiki/PowerMockRule Set up a test that mocks some sort of static class. In this case it's a custom SQLUtils object that takes a java.sql.Connection and a string. For our purposes here I am using xstream, but I have been able to reproduce it with objenesis. Here's a standard pattern for testing:
@PrepareForTest({SQLUtils.class})
public class PowerMockTest {
@Rule
public final PowerMockRule powermock = new PowerMockRule ();
private final Connection conn = mock(Connection.class);
@Before
public void setUp() throws Exception {
reset(conn);
PowerMockito.mockStatic(SQLUtils.class);
}
@Test
public void testCall() throws Exception {
SQLUtils.execute(conn, "my sql");
}
} What is the expected output? What do you see instead? I'd expect it to run. Instead it throws the attached exception.
What version of the product are you using?
1.4.11 with Mockito 1.9 and junit 4.10. Running in Eclipse, dependencies being managed by Maven. Please provide any additional information below. The following code executes without issue:
@PrepareForTest({SQLUtils.class})
public class PowerMockTest {
@Rule
public final PowerMockRule powermock = new PowerMockRule ();
private Connection conn;
@Before
public void setUp() throws Exception {
conn = mock(Connection.class);
PowerMockito.mockStatic(SQLUtils.class);
}
@Test
public void testCall() throws Exception {
SQLUtils.execute(conn, "my sql");
}
}
It also executes without any issue if I use the @RunWith(PowerMockRunner.class) annotation instead of the PowerMockRule .
The following things were tried to get the Rule to work:
1) I could not find any combination of things I could put into @PowerMockIgnore that would fix the issue (attempting ignoring java.sql.* java.sql.Connection org.mockito.* etc). Bizarrely, ignoring SQLUtils worked, but then the class wasn't mocked.
2) Initializing the objects via a constructor.
3) Marking the rule as Static.
4) Changing the dependency ordering (if this is the problem, I couldn't find the exact combination).
5) Putting the Rule declaration in a parent class (along with putting the declarations in a parent class).
So far the only solution I've found is to initialize all of my mocks in the setUp() method. I can do this as a workaround, but I couldn't find any documentation indicating that what I am doing is not supported.
**Attachment:** [stacktrace.txt](http://code.google.com/p/powermock/issues/detail?id=362)
_Original issue: http://code.google.com/p/powermock/issues/detail?id=362_ | 1.0 | PowerMockRule interacts poorly with mocked member variables - _From [nachtr...@gmail.com](https://code.google.com/u/112111327311455846174/) on January 18, 2012 23:15:32_
What steps will reproduce the problem? Following the methodology here: https://code.google.com/p/powermock/wiki/PowerMockRule Set up a test that mocks some sort of static class. In this case it's a custom SQLUtils object that takes a java.sql.Connection and a string. For our purposes here I am using xstream, but I have been able to reproduce it with objenesis. Here's a standard pattern for testing:
@PrepareForTest({SQLUtils.class})
public class PowerMockTest {
@Rule
public final PowerMockRule powermock = new PowerMockRule ();
private final Connection conn = mock(Connection.class);
@Before
public void setUp() throws Exception {
reset(conn);
PowerMockito.mockStatic(SQLUtils.class);
}
@Test
public void testCall() throws Exception {
SQLUtils.execute(conn, "my sql");
}
} What is the expected output? What do you see instead? I'd expect it to run. Instead it throws the attached exception.
What version of the product are you using?
1.4.11 with Mockito 1.9 and junit 4.10. Running in Eclipse, dependencies being managed by Maven. Please provide any additional information below. The following code executes without issue:
@PrepareForTest({SQLUtils.class})
public class PowerMockTest {
@Rule
public final PowerMockRule powermock = new PowerMockRule ();
private Connection conn;
@Before
public void setUp() throws Exception {
conn = mock(Connection.class);
PowerMockito.mockStatic(SQLUtils.class);
}
@Test
public void testCall() throws Exception {
SQLUtils.execute(conn, "my sql");
}
}
It also executes without any issue if I use the @RunWith(PowerMockRunner.class) annotation instead of the PowerMockRule .
The following things were tried to get the Rule to work:
1) I could not find any combination of things I could put into @PowerMockIgnore that would fix the issue (attempting ignoring java.sql.* java.sql.Connection org.mockito.* etc). Bizarrely, ignoring SQLUtils worked, but then the class wasn't mocked.
2) Initializing the objects via a constructor.
3) Marking the rule as Static.
4) Changing the dependency ordering (if this is the problem, I couldn't find the exact combination).
5) Putting the Rule declaration in a parent class (along with putting the declarations in a parent class).
So far the only solution I've found is to initialize all of my mocks in the setUp() method. I can do this as a workaround, but I couldn't find any documentation indicating that what I am doing is not supported.
**Attachment:** [stacktrace.txt](http://code.google.com/p/powermock/issues/detail?id=362)
_Original issue: http://code.google.com/p/powermock/issues/detail?id=362_ | priority | powermockrule interacts poorly with mocked member variables from on january what steps will reproduce the problem following the methodology here set up a test that mocks some sort of static class in this case it s a custom sqlutils object that takes a java sql connection and a string for our purposes here i am using xstream but i have been able to reproduce it with objenesis here s a standard pattern for testing preparefortest sqlutils class public class powermocktest rule public final powermockrule powermock new powermockrule private final connection conn mock connection class before public void setup throws exception reset conn powermockito mockstatic sqlutils class test public void testcall throws exception sqlutils execute conn my sql what is the expected output what do you see instead i d expect it to run instead it throws the attached exception what version of the product are you using with mockito and junit running in eclipse dependencies being managed by maven please provide any additional information below the following code executes without issue preparefortest sqlutils class public class powermocktest rule public final powermockrule powermock new powermockrule private connection conn before public void setup throws exception conn mock connection class powermockito mockstatic sqlutils class test public void testcall throws exception sqlutils execute conn my sql it also executes without any issue if i use the runwith powermockrunner class annotation instead of the powermockrule the following things were tried to get the rule to work i could not find any combination of things i could put into powermockignore that would fix the issue attempting ignoring java sql java sql connection org mockito etc bizarrely ignoring sqlutils worked but then the class wasn t mocked initializing the objects via a constructor marking the rule as static changing the dependency ordering if this 
is the problem i couldn t find the exact combination putting the rule declaration in a parent class along with putting the declarations in a parent class so far the only solution i ve found is to initialize all of my mocks in the setup method i can do this as a workaround but i couldn t find any documentation indicating that what i am doing is not supported attachment original issue | 1 |
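The workaround the reporter settled on (create mocks inside the framework's `setUp()` hook rather than as instance-field initializers) translates to other test frameworks too. Below is a small Python `unittest` analogue of that pattern; the class and mock names are illustrative, not from the original Java code:

```python
# Sketch of the reported workaround in Python: build test doubles inside
# setUp(), so any instrumentation the runner applies to the test class is
# already in place before the doubles exist.
import unittest
from unittest import mock


class ConnectionTest(unittest.TestCase):
    def setUp(self):
        # Created per-test in the fixture hook, not as a field initializer.
        self.conn = mock.Mock(name="connection")

    def test_execute(self):
        self.conn.execute("my sql")
        self.conn.execute.assert_called_once_with("my sql")


# Run the case programmatically so the outcome can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ConnectionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The design point mirrors the Java issue: field initializers run before per-test setup machinery, which is exactly where the PowerMockRule interaction broke.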
140,577 | 5,412,556,667 | IssuesEvent | 2017-03-01 14:50:20 | stats4sd/wordcloud-app | https://api.github.com/repos/stats4sd/wordcloud-app | opened | Add ability to edit own comments | 1 - Ready Impact-Medium Priority-Medium Size-Medium Type-feature | Would be nice for users to be able to review and edit the comments they've submitted.
- edit comment text
- add / edit tags attached to the comment
**How:**
- Generate comment IDs locally to store them in local cache
- update newComment <firebase-document> to push key instead of letting firebase generate the keys.
- there is a page that currently shows a full list of comments. This can be edited to show only your own comments (with the locally created IDs)
| 1.0 | Add ability to edit own comments - Would be nice for users to be able to review and edit the comments they've submitted.
- edit comment text
- add / edit tags attached to the comment
**How:**
- Generate comment IDs locally to store them in local cache
- update newComment <firebase-document> to push key instead of letting firebase generate the keys.
- there is a page that currently shows a full list of comments. This can be edited to show only your own comments (with the locally created IDs)
| priority | add ability to edit own comments would be nice for users to be able to review and edit the comments they ve submitted edit comment text add edit tags attached to the comment how generate comment ids locally to store them in local cache update newcomment to push key instead of letting firebase generate the keys there is a page that currently shows a full list of comments this can be edited to show only your own comments with the locally created ids | 1 |
614,822 | 19,190,335,074 | IssuesEvent | 2021-12-05 22:01:05 | RE-SS3D/SS3D | https://api.github.com/repos/RE-SS3D/SS3D | opened | Implement Wire Adjacency Connections | Type: Feature (Addition) Asset: Script Coding: C# Priority: 2 - High Difficulty: 2 - Medium System: Tilemaps | <!-- The notes within these arrows are for you but can be deleted. -->
## Summary
Implement a new tilemap adjacency connection script (similar to the others) for "wire connections". This should follow the design located in the link below.
## Goal
This will allow for wires to be added to the map via the editor and perform intended connections.
https://docs.google.com/document/d/1ful7_gIJo7e74i9LMQuYMpjZMH2V1aT90mwlrQ0hcgE/#heading=h.15jb46xsi8l2 | 1.0 | Implement Wire Adjacency Connections - <!-- The notes within these arrows are for you but can be deleted. -->
## Summary
Implement a new tilemap adjacency connection script (similar to the others) for "wire connections". This should follow the design located in the link below.
## Goal
This will allow for wires to be added to the map via the editor and perform intended connections.
https://docs.google.com/document/d/1ful7_gIJo7e74i9LMQuYMpjZMH2V1aT90mwlrQ0hcgE/#heading=h.15jb46xsi8l2 | priority | implement wire adjacency connections summary implement a new tilemap adjacency connection script similar to the others for wire connections this should follow the design located in the link below goal this will allow for wires to be added to the map via the editor and perform intended connections | 1 |
769,569 | 27,012,078,603 | IssuesEvent | 2023-02-10 16:09:28 | TimidBagel/Group-Clicker-Game | https://api.github.com/repos/TimidBagel/Group-Clicker-Game | closed | Buildings go outside of their container | bug CSS/Styling Medium Priority HTML | 
The buildings go outside of the container instead of hiding them and making a scroll bar. | 1.0 | Buildings go outside of their container - 
The buildings go outside of the container instead of hiding them and making a scroll bar. | priority | buildings go outside of their container the buildings go outside of the container instead of hiding them and making a scroll bar | 1 |
642,261 | 20,871,942,716 | IssuesEvent | 2022-03-22 12:47:48 | owncloud/web | https://api.github.com/repos/owncloud/web | closed | Renaming object via hamburger icon doesn't work | Type:Bug Priority:p3-medium | web: branch v5.2.0-rc.3:
ocis: docker v.1.17.0
Steps:
- admin creates folder
- admin tries to rename folder via hamburger icon
Expected: successfully renamed
Actual: method `https://localhost:9200/remote.php/dav/Shares` is used instead of `https://localhost:9200/remote.php/dav/files/marie/Shares`

https://user-images.githubusercontent.com/84779829/156556640-ad50b32c-969a-4254-8a98-3ba07a4433bd.mov
Renaming object via hamburger icon doesn't work - web: branch v5.2.0-rc.3:
ocis: docker v.1.17.0
Steps:
- admin creates folder
- admin tries to rename folder via hamburger icon
Expected: successfully renamed
Actual: method `https://localhost:9200/remote.php/dav/Shares` is used instead of `https://localhost:9200/remote.php/dav/files/marie/Shares`

https://user-images.githubusercontent.com/84779829/156556640-ad50b32c-969a-4254-8a98-3ba07a4433bd.mov
| priority | renaming object via hamburger icon does t work web branch rc ocis docker v steps admin creates folder admin tries to rename folder via hamburger icon expected succufuly actual method is used instead of | 1 |
826,147 | 31,558,721,638 | IssuesEvent | 2023-09-03 01:25:11 | wanderer-moe/site | https://api.github.com/repos/wanderer-moe/site | closed | search by [multiple] tags | api priority: medium db | allow for searching by multiple tags in addition to the existing new search feature that allows for searching multiple games / categories at once | 1.0 | search by [multiple] tags - allow for searching by multiple tags in addition to the existing new search feature that allows for searching multiple games / categories at once | priority | search by tags allow for searching by multiple tags in addition to the existing new search feature that allows for searching multiple games categories at once | 1 |
825,778 | 31,471,370,388 | IssuesEvent | 2023-08-30 07:47:12 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | Review/revise logger warn and error messages | priority-3-medium type:refactor status:ready | ### Describe the proposed change(s).
We should clarify the correct situations for warns and errors. Importantly:
- Warn messages will result in _repository users_ being alerted, e.g. in their Dependency Dashboard
- Error messages will result in the _bot admin_ being alerted, because the bot will exit with a non-zero code
I think we should error whenever we reach code _which shouldn't happen_, e.g. it represents a bug which needs fixing in Renovate source code. Alerting bot admins to this is beneficial because then we are more likely to discover it.
Errors naturally should also include bot admin misconfiguration, because the bot admin should be aware of those.
Errors should include permanent or serious external errors, such as unavailability of github, bitbucket, or whatever they're using. Example: `bitbucket getRepos error`. A challenge with this is that it's a grey area because it's hard for us to know what's permanent and what's temporary.
Errors should not include _repository_ misconfiguration, as bot admins do not have control over those - they should be warnings.
Today we have 55 instances of `logger.error` in the source code. An example of one which I think is a mistake is `'Error updating yarn offline packages'`.
### Describe why we need/want these change(s).
Consistency | 1.0 | Review/revise logger warn and error messages - ### Describe the proposed change(s).
We should clarify the correct situations for warns and errors. Importantly:
- Warn messages will result in _repository users_ being alerted, e.g. in their Dependency Dashboard
- Error messages will result in the _bot admin_ being alerted, because the bot will exit with a non-zero code
I think we should error whenever we reach code _which shouldn't happen_, e.g. it represents a bug which needs fixing in Renovate source code. Alerting bot admins to this is beneficial because then we are more likely to discover it.
Errors naturally should also include bot admin misconfiguration, because the bot admin should be aware of those.
Errors should include permanent or serious external errors, such as unavailability of github, bitbucket, or whatever they're using. Example: `bitbucket getRepos error`. A challenge with this is that it's a grey area because it's hard for us to know what's permanent and what's temporary.
Errors should not include _repository_ misconfiguration, as bot admins do not have control over those - they should be warnings.
Today we have 55 instances of `logger.error` in the source code. An example of one which I think is a mistake is `'Error updating yarn offline packages'`.
### Describe why we need/want these change(s).
Consistency | priority | review revise logger warn and error messages describe the proposed change s we should clarify the correct situations for warns and errors importantly warn messages will result in repository users being alerted e g in their dependency dashboard error messages will result in the bot admin being alerted because the bot will exit with a non zero code i think we should error whenever we reach code which shouldn t happen e g it represents a bug which needs fixing in renovate source code alerting bot admins to this is beneficial because then we are more likely to discover it errors naturally should also include bot admin misconfiguration because the bot admin should be aware of those errors should include permanent or serious external errors such as unavailability of github bitbucket or whatever they re using example bitbucket getrepos error a challenge with this is that it s a grey area because it s hard for us to know what s permanent and what s temporary errors should not include repository misconfiguration as bot admins do not have control over those they should be warnings today we have instances of logger error in the source code an example of one which i think is a mistake is error updating yarn offline packages describe why we need want these change s consistency | 1 |
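The warn/error policy proposed in the record above (warnings surface to repository users, errors additionally fail the run so the bot admin is alerted via a non-zero exit code) can be sketched as a tiny logger wrapper. All names here are illustrative assumptions, not Renovate's actual code:

```python
# Hypothetical sketch of the proposed policy: warn -> dashboard only,
# error -> dashboard plus a failed run (non-zero exit code for the admin).
class RunLogger:
    def __init__(self):
        self.dashboard_messages = []  # surfaced to repository users
        self.failed = False           # surfaced to the bot admin via exit code

    def warn(self, message):
        # Repository-level problems: users can act on these themselves.
        self.dashboard_messages.append(message)

    def error(self, message):
        # Bugs, admin misconfiguration, or serious external failures:
        # also mark the whole run as failed.
        self.dashboard_messages.append(message)
        self.failed = True

    def exit_code(self):
        return 1 if self.failed else 0
```

Under this split, repository misconfiguration calls `warn`, while something like `bitbucket getRepos error` calls `error`.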
739,447 | 25,597,449,542 | IssuesEvent | 2022-12-01 17:13:49 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] TServer crash when a table is accessed again in a transaction after another session drops it | kind/bug area/ysql priority/medium | Jira Link: [DB-4368](https://yugabyte.atlassian.net/browse/DB-4368)
### Description
Steps to reproduce:
(1) create a table
```
ysqlsh (11.2-YB-2.17.1.0-b0)
Type "help" for help.
yugabyte=# create table t(id int);
CREATE TABLE
yugabyte=# \q
```
(2) In a new session, start a transaction with
```
ysqlsh (11.2-YB-2.17.1.0-b0)
Type "help" for help.
yugabyte=# begin;
BEGIN
yugabyte=# select * from t;
id
----
(0 rows)
yugabyte=#
```
(3) Start another ysqlsh connection and drop the table t.
```
ysqlsh (11.2-YB-2.17.1.0-b0)
Type "help" for help.
yugabyte=# drop table t;
DROP TABLE
yugabyte=# \q
```
(4) Go back to the session in (2), do another select.
```
yugabyte=# select * from t;
FATAL: terminating connection due to administrator command
ERROR: Aborted: Shutdown connection
yugabyte=# \q
```
(5) Look at tserver log and observed a FATAL:
```
F1129 23:16:22.007297 16086 rpc_context.cc:137] RpcContext is destroyed, but response has not been sent, for call: Call yb.tserver.PgClientService.Perform 127.0.0.1:48327 => 127.0.0.1:9100 (request call id 41)
```
| 1.0 | [YSQL] TServer crash when a table is accessed again in a transaction after another session drops it - Jira Link: [DB-4368](https://yugabyte.atlassian.net/browse/DB-4368)
### Description
Steps to reproduce:
(1) create a table
```
ysqlsh (11.2-YB-2.17.1.0-b0)
Type "help" for help.
yugabyte=# create table t(id int);
CREATE TABLE
yugabyte=# \q
```
(2) In a new session, start a transaction with
```
ysqlsh (11.2-YB-2.17.1.0-b0)
Type "help" for help.
yugabyte=# begin;
BEGIN
yugabyte=# select * from t;
id
----
(0 rows)
yugabyte=#
```
(3) Start another ysqlsh connection and drop the table t.
```
ysqlsh (11.2-YB-2.17.1.0-b0)
Type "help" for help.
yugabyte=# drop table t;
DROP TABLE
yugabyte=# \q
```
(4) Go back to the session in (2), do another select.
```
yugabyte=# select * from t;
FATAL: terminating connection due to administrator command
ERROR: Aborted: Shutdown connection
yugabyte=# \q
```
(5) Look at tserver log and observed a FATAL:
```
F1129 23:16:22.007297 16086 rpc_context.cc:137] RpcContext is destroyed, but response has not been sent, for call: Call yb.tserver.PgClientService.Perform 127.0.0.1:48327 => 127.0.0.1:9100 (request call id 41)
```
| priority | tserver crash when a table is accessed again in a transaction after another session drops it jira link description steps to reproduce create a table ysqlsh yb type help for help yugabyte create table t id int create table yugabyte q in a new session start a transaction with ysqlsh yb type help for help yugabyte begin begin yugabyte select from t id rows yugabyte start another ysqlsh connection and drop the table t ysqlsh yb type help for help yugabyte drop table t drop table yugabyte q go back to the session in do another select yugabyte select from t fatal terminating connection due to administrator command error aborted shutdown connection yugabyte q look at tserver log and observed a fatal rpc context cc rpccontext is destroyed but response has not been sent for call call yb tserver pgclientservice perform request call id | 1 |
614,397 | 19,181,887,324 | IssuesEvent | 2021-12-04 14:50:25 | BlueBubblesApp/bluebubbles-app | https://api.github.com/repos/BlueBubblesApp/bluebubbles-app | opened | Fix issue where image disappears after sending. Comes back after leave and re enter | Bug priority: high Alpha Difficulty: Medium | Not sure if this will work for you, but here is what I did:
1. Send an image with text
2. Before the image fully sends, leave the app
3. Wait a sec
4. Re enter the app
5. Image seems to disappear or flicker
6. Send a message and it comes back | 1.0 | Fix issue where image disappears after sending. Comes back after leave and re enter - Not sure if this will work for you, but here is what I did:
1. Send an image with text
2. Before the image fully sends, leave the app
3. Wait a sec
4. Re enter the app
5. Image seems to disappear or flicker
6. Send a message and it comes back | priority | fix issue where image disappears after sending comes back after leave and re enter not sure if this will work for you but here is what i did send an image with text before the image fully sends leave the app wait a sec re enter the app image seems to disappear or flicker send a message and it comes back | 1 |
311,740 | 9,538,586,448 | IssuesEvent | 2019-04-30 15:01:18 | Terrastories/terrastories | https://api.github.com/repos/Terrastories/terrastories | opened | [CMS] Ensure that duplicate items cannot be added to the database (importer) | difficulty: easy priority: medium status: help wanted type: cms | Currently when the stories csv file is imported, if it's run again it will add duplicate records.
Add validations to the stories model to ensure that duplicates cannot be created. Consider adding a database migration to add unique indexes on these fields as well.
Please add tests to verify this behavior as well.
Some discussion on this subject took place at Ruby by the Bay 2019: https://github.com/rubyforgood/terrastories/issues/20 | 1.0 | [CMS] Ensure that duplicate items cannot be added to the database (importer) - Currently when the stories csv file is imported, if it's run again it will add duplicate records.
Add validations to the stories model to ensure that duplicates cannot be created. Consider adding a database migration to add unique indexes on these fields as well.
Please add tests to verify this behavior as well.
Some discussion on this subject took place at Ruby by the Bay 2019: https://github.com/rubyforgood/terrastories/issues/20 | priority | ensure that duplicate items cannot be added to the database importer currently when the stories csv file is imported if it s run again it will add duplicate records add validations to the stories model to ensure that duplicates cannot be created consider adding a database migration to add unique indexes on these fields as well please add tests to verify this behavior as well some discussion on this subject took place at ruby by the bay | 1 |
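The idempotent-import behavior requested above (re-running the CSV importer must not create duplicate records) can be sketched with a natural-key check. The key fields below are illustrative guesses, not Terrastories' actual schema; in the real app a model validation plus a unique database index would enforce the same invariant:

```python
# Sketch of an idempotent importer: skip rows whose natural key already
# exists, so running the import twice adds nothing the second time.
# The ("title", "place") key is a hypothetical choice for illustration.
def import_stories(existing, rows, key=("title", "place")):
    """Append only rows whose key tuple is not already present.

    Returns the number of rows actually added.
    """
    seen = {tuple(story[field] for field in key) for story in existing}
    added = 0
    for row in rows:
        k = tuple(row[field] for field in key)
        if k in seen:
            continue  # duplicate of an existing record; skip it
        existing.append(row)
        seen.add(k)
        added += 1
    return added
```

A unique index on the same columns guards against races between two concurrent imports, which the in-memory check alone cannot.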
40,793 | 2,868,942,498 | IssuesEvent | 2015-06-05 22:06:03 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | pub: cannot use git dependency except for leaf packages. | bug duplicate Priority-Medium | <a href="https://github.com/jmesserly"><img src="https://avatars.githubusercontent.com/u/1081711?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [jmesserly](https://github.com/jmesserly)**
_Originally opened as dart-lang/sdk#8553_
----
Use case: you want to use a "git" dependency for web_ui. You're also using "foo" that depends on web_ui via hosted dep. It won't work.
What is best course of action here?
It seems hard to ever use a git dependency if the package is hosted on Pub.
Should we be using "git" dependencies in all of our git repos, and the transformation to hosted deps happens as part of "pub lish"?
Reported here:
https://github.com/dart-lang/csslib/issues/6#issuecomment-13591480
| 1.0 | pub: cannot use git dependency except for leaf packages. - <a href="https://github.com/jmesserly"><img src="https://avatars.githubusercontent.com/u/1081711?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [jmesserly](https://github.com/jmesserly)**
_Originally opened as dart-lang/sdk#8553_
----
Use case: you want to use a "git" dependency for web_ui. You're also using "foo" that depends on web_ui via hosted dep. It won't work.
What is best course of action here?
It seems hard to ever use a git dependency if the package is hosted on Pub.
Should we be using "git" dependencies in all of our git repos, and the transformation to hosted deps happens as part of "pub lish"?
Reported here:
https://github.com/dart-lang/csslib/issues/6#issuecomment-13591480
| priority | pub cannot use git dependency except for leaf packages issue by originally opened as dart lang sdk use case you want to use a quot git quot dependency for web ui you re also using quot foo quot that depends on web ui via hosted dep it won t work what is best course of action here it seems hard to ever use a git dependency if the package is hosted on pub should we be using quot git quot dependencies in all of our git repos and the transformation to hosted deps happens as part of quot pub lish quot reported here | 1 |
610,789 | 18,924,461,652 | IssuesEvent | 2021-11-17 07:56:45 | MadsBalslev/P3 | https://api.github.com/repos/MadsBalslev/P3 | closed | Refactor ```frontend.Shared.Manager``` and assoiciated files | Type: Maintenance Priority: Medium Domain: Client | Limit this do a one person, one day assignment. | 1.0 | Refactor ```frontend.Shared.Manager``` and assoiciated files - Limit this do a one person, one day assignment. | priority | refactor frontend shared manager and assoiciated files limit this do a one person one day assignment | 1 |
725,358 | 24,959,796,839 | IssuesEvent | 2022-11-01 14:41:43 | AY2223S1-CS2103T-T15-3/tp | https://api.github.com/repos/AY2223S1-CS2103T-T15-3/tp | closed | [PE-D][Tester A] Edit after filter | wontfix priority.Medium severity.Medium | I think it is quite inconvenient that after every edit, the list would reset back to all tutors. I don't think this is optimised for convenience as I would have to look for the edited person again. The user can decide to list all the tutors again if he chooses instead of being forced to filter/sort the whole list again.
<!--session: 1666944102466-50c9b61e-451f-454d-9fa3-2270e2961556-->
<!--Version: Web v3.4.4-->
-------------
Labels: `severity.Medium` `type.FeatureFlaw`
original: HoJunHao2000/ped#4 | 1.0 | [PE-D][Tester A] Edit after filter - I think it is quite inconvenient that after every edit, the list would reset back to all tutors. I don't think this is optimised for convenience as I would have to look for the edited person again. The user can decide to list all the tutors again if he chooses instead of being forced to filter/sort the whole list again.
<!--session: 1666944102466-50c9b61e-451f-454d-9fa3-2270e2961556-->
<!--Version: Web v3.4.4-->
-------------
Labels: `severity.Medium` `type.FeatureFlaw`
original: HoJunHao2000/ped#4 | priority | edit after filter i think it is quite inconvenient that after every edit the list would reset back to all tutors i don t think this is optimised for convenience as i would have to look for the edited person again the user can decide to list all the tutors again if he chooses instead of being forced to filter sort the whole list again labels severity medium type featureflaw original ped | 1 |
31,381 | 2,732,898,844 | IssuesEvent | 2015-04-17 10:04:54 | tiku01/oryx-editor | https://api.github.com/repos/tiku01/oryx-editor | closed | Edit common properties of multiple selected shapes | auto-migrated Component-Editor OpSys-All Priority-Medium Type-Enhancement | ```
If multiple shapes are selected, it should be possible to edit the
properties they have in common. That would save a great deal of work,
e.g. when setting colors.
```
Original issue reported on code.google.com by `falko.me...@gmail.com` on 22 Oct 2008 at 3:11 | 1.0 | Edit common properties of multiple selected shapes - ```
If multiple shapes are selected, it should be possible to edit the
properties they have in common. That would save a great deal of work,
e.g. when setting colors.
```
Original issue reported on code.google.com by `falko.me...@gmail.com` on 22 Oct 2008 at 3:11 | priority | edit common properties of multiple selected shapes if multiple shapes are selected it should be possible to edit the properties they have in common that would save a great deal of work e g when setting colors original issue reported on code google com by falko me gmail com on oct at | 1 |
593,349 | 17,971,035,608 | IssuesEvent | 2021-09-14 02:03:22 | hackforla/design-systems | https://api.github.com/repos/hackforla/design-systems | closed | Week of September 13 2021 | priority: medium Role: Whole DS team Feature - Agenda | ### Overview
This issue tracks the agenda for the design systems meetings and weekly roll call for projects.
Please check in on the [Roll Call Spreadsheet](https://docs.google.com/spreadsheets/d/1JtJGxSpVQR3t3wN8P5iHtLZ-QEk-NtewGurNTikw1t0/edit#gid=0)
### Action Items
- [x] House keeping
- [x] New team member introduction
- [x] Change sizing on Issues
- [ ] Update from Research
- [ ] Update from UX/UI Team
| 1.0 | Week of September 13 2021 - ### Overview
This issue tracks the agenda for the design systems meetings and weekly roll call for projects.
Please check in on the [Roll Call Spreadsheet](https://docs.google.com/spreadsheets/d/1JtJGxSpVQR3t3wN8P5iHtLZ-QEk-NtewGurNTikw1t0/edit#gid=0)
### Action Items
- [x] House keeping
- [x] New team member introduction
- [x] Change sizing on Issues
- [ ] Update from Research
- [ ] Update from UX/UI Team
| priority | week of september overview this issue tracks the agenda for the design systems meetings and weekly roll call for projects please check in on the action items house keeping new team member introduction change sizing on issues update from research update from ux ui team | 1 |
97,605 | 3,996,592,606 | IssuesEvent | 2016-05-10 19:18:13 | datreant/datreant.core | https://api.github.com/repos/datreant/datreant.core | closed | provide conda packages | packaging/distribution priority medium | Since MDAnalysis has joined the conda hotness I thought datreant could do the some. I got packages for all dependencies already except for scandir.
https://anaconda.org/kain88-de/packages | 1.0 | provide conda packages - Since MDAnalysis has joined the conda hotness I thought datreant could do the some. I got packages for all dependencies already except for scandir.
https://anaconda.org/kain88-de/packages | priority | provide conda packages since mdanalysis has joined the conda hotness i thought datreant could do the some i got packages for all dependencies already except for scandir | 1 |
653,673 | 21,610,170,395 | IssuesEvent | 2022-05-04 09:16:58 | MadsBalslev/SW4-2022 | https://api.github.com/repos/MadsBalslev/SW4-2022 | closed | 4.2.x.x Step declaration transition system | Priority: Medium | - [ ] Simon
- [ ] Nicolai
- [ ] Mads
- [ ] Lukas
- [ ] Patrick
- [x] Casper
- [ ] Sean | 1.0 | 4.2.x.x Step declaration transition system - - [ ] Simon
- [ ] Nicolai
- [ ] Mads
- [ ] Lukas
- [ ] Patrick
- [x] Casper
- [ ] Sean | priority | x x step declaration transition system simon nicolai mads lukas patrick casper sean | 1 |
797,736 | 28,153,920,379 | IssuesEvent | 2023-04-03 05:26:16 | space-wizards/RobustToolbox | https://api.github.com/repos/space-wizards/RobustToolbox | opened | Making your game too narrow crashes it | Issue: Bug Priority: 2-Important Difficulty: 2-Medium | Repro:
Turn on separated chat
Make game narrow
Game hard locks
Seems to be some kind of infinite loop in measure.

| 1.0 | Making your game too narrow crashes it - Repro:
Turn on separated chat
Make game narrow
Game hard locks
Seems to be some kind of infinite loop in measure.

| priority | making your game too narrow crashes it repro turn on separated chat make game narrow game hard locks seems to be some kind of infinite loop in measure | 1 |
710,034 | 24,401,607,318 | IssuesEvent | 2022-10-05 02:20:10 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio] Editorial BP search script with multiple categories not work | bug priority: medium triage validate | ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [X] The issue is in the latest released 4.0.x
- [ ] The issue is in the latest released 3.1.x
### Describe the issue
Search script `scripts/rest/search.get.groovy` with multiple categories value does not work.
### Steps to reproduce
Steps:
1. Create a new site from Editorial BP
2. Search with `categories[]` parameter to include both `health` and `style`. Ex: `GET http://localhost:8080/api/search.json?categories%5B%5D=health%20style`
3. Notice that the result is empty
### Relevant log output
_No response_
### Screenshots and/or videos
_No response_ | 1.0 | [studio] Editorial BP search script with multiple categories not work - ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [X] The issue is in the latest released 4.0.x
- [ ] The issue is in the latest released 3.1.x
### Describe the issue
Search script `scripts/rest/search.get.groovy` with multiple categories value does not work.
### Steps to reproduce
Steps:
1. Create a new site from Editorial BP
2. Search with `categories[]` parameter to include both `health` and `style`. Ex: `GET http://localhost:8080/api/search.json?categories%5B%5D=health%20style`
3. Notice that the result is empty
### Relevant log output
_No response_
### Screenshots and/or videos
_No response_ | priority | editorial bp search script with multiple categories not work duplicates i have searched the existing issues latest version the issue is in the latest released x the issue is in the latest released x describe the issue search script scripts rest search get groovy with multiple categories value does not work steps to reproduce steps create a new site from editorial bp search with categories parameter to include both health and style ex get notice that the result is empty relevant log output no response screenshots and or videos no response | 1 |
301,245 | 9,217,992,852 | IssuesEvent | 2019-03-11 12:16:15 | Bill-Ferny/CS1830-LightsOut | https://api.github.com/repos/Bill-Ferny/CS1830-LightsOut | closed | 007: handling scores in game_over | Medium priority task | # Handling scores in game_over
* Connect the scores, add_score method to game_over | 1.0 | 007: handling scores in game_over - # Handling scores in game_over
* Connect the scores, add_score method to game_over | priority | handling scores in game over handling scores in game over connect the scores add score method to game over | 1 |
2,686 | 2,532,468,343 | IssuesEvent | 2015-01-23 16:18:00 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | Download button - one file selected | Priority: Medium Status: Dev Type: Feature | per Gustavo, opening a new ticket in relation to https://github.com/IQSS/dataverse/issues/1310 last question.
Download button action:
when one file is selected > actions grayed out > it should enable 'download selected' | 1.0 | Download button - one file selected - per Gustavo, opening a new ticket in relation to https://github.com/IQSS/dataverse/issues/1310 last question.
Download button action:
when one file is selected > actions grayed out > it should enable 'download selected' | priority | download button one file selected per gustavo opening a new ticket in relation to last question download button action when one file is selected actions grayed out it should enable download selected | 1 |
91,908 | 3,863,517,336 | IssuesEvent | 2016-04-08 09:45:49 | iamxavier/elmah | https://api.github.com/repos/iamxavier/elmah | closed | Create a FAQ Page | auto-migrated Priority-Medium Type-Enhancement | ```
What new or enhanced feature are you proposing?
A FAQ page in the wiki
What goal would this enhancement help you achieve?
Reduce the amount of head-scratching required to use ELMAH
```
Original issue reported on code.google.com by `Zian.C...@gmail.com` on 3 Sep 2009 at 3:20 | 1.0 | Create a FAQ Page - ```
What new or enhanced feature are you proposing?
A FAQ page in the wiki
What goal would this enhancement help you achieve?
Reduce the amount of head-scratching required to use ELMAH
```
Original issue reported on code.google.com by `Zian.C...@gmail.com` on 3 Sep 2009 at 3:20 | priority | create a faq page what new or enhanced feature are you proposing a faq page in the wiki what goal would this enhancement help you achieve reduce the amount of head scratching required to use elmah original issue reported on code google com by zian c gmail com on sep at | 1 |
285,647 | 8,767,494,867 | IssuesEvent | 2018-12-17 19:55:57 | webhintio/hint | https://api.github.com/repos/webhintio/hint | closed | [create hint] EventEmitter warning + script never ends on macOS and node 10+ | priority:medium type:bug | Node: v10.10.0
npm: 6.4.1
hint: v3.4.10
```sh
$ npm create hint
npx: installed 1 in 1.18s
? Is this a package with multiple hints? (yes) No
? What's the name of this new hint? tmp
? What's the description of this new hint 'tmp'? ...
? Please select the category of this new hint: development
? Please select the category of use case: (Use arrow keys)
❯ DOM
? Please select the category of use case: DOM
? What DOM element does the hint need access to? (div) (node:6037) Error: Possible EventEmitter memory leak detected. 11 keypress listeners added. Use emitter.setMaxListeners() to increase limit
? What DOM element does the hint need access to? div
? Please select the scope of this new hint: any
Creating new hint in /Users/alrra/server/4/hint-copyright
External files copied
Creating new hint in /Users/alrra/server/4/hint-copyright
External files copied
New hint copyright created in /Users/alrra/server/4/hint-copyright
--------------------------------------
---- How to use ----
--------------------------------------
1. Go to the folder 'hint-copyright'.
2. Run 'npm run init' to install all the dependencies and build the project.
3. Run 'npm run hint -- https://YourUrl' to analyze you site.
```
Also at the end it never returns the prompt. | 1.0 | [create hint] EventEmitter warning + script never ends on macOS and node 10+ - Node: v10.10.0
npm: 6.4.1
hint: v3.4.10
```sh
$ npm create hint
npx: installed 1 in 1.18s
? Is this a package with multiple hints? (yes) No
? What's the name of this new hint? tmp
? What's the description of this new hint 'tmp'? ...
? Please select the category of this new hint: development
? Please select the category of use case: (Use arrow keys)
❯ DOM
? Please select the category of use case: DOM
? What DOM element does the hint need access to? (div) (node:6037) Error: Possible EventEmitter memory leak detected. 11 keypress listeners added. Use emitter.setMaxListeners() to increase limit
? What DOM element does the hint need access to? div
? Please select the scope of this new hint: any
Creating new hint in /Users/alrra/server/4/hint-copyright
External files copied
Creating new hint in /Users/alrra/server/4/hint-copyright
External files copied
New hint copyright created in /Users/alrra/server/4/hint-copyright
--------------------------------------
---- How to use ----
--------------------------------------
1. Go to the folder 'hint-copyright'.
2. Run 'npm run init' to install all the dependencies and build the project.
3. Run 'npm run hint -- https://YourUrl' to analyze you site.
```
Also at the end it never returns the prompt. | priority | eventemitter warning script never ends on macos and node node npm hint sh npm create hint npx installed in is this a package with multiple hints yes no what s the name of this new hint tmp what s the description of this new hint tmp please select the category of this new hint development please select the category of use case use arrow keys ❯ dom please select the category of use case dom what dom element does the hint need access to div node error possible eventemitter memory leak detected keypress listeners added use emitter setmaxlisteners to increase limit what dom element does the hint need access to div please select the scope of this new hint any creating new hint in users alrra server hint copyright external files copied creating new hint in users alrra server hint copyright external files copied new hint copyright created in users alrra server hint copyright how to use go to the folder hint copyright run npm run init to install all the dependencies and build the project run npm run hint to analyze you site also at the end it never returns the prompt | 1 |
499,795 | 14,479,380,752 | IssuesEvent | 2020-12-10 09:44:03 | amanianai/covid-conscious | https://api.github.com/repos/amanianai/covid-conscious | closed | Add categories /home to the database | Development Medium Priority ToDo | **Category/Keyword tags on /austin (city homepage) view**
Feel free to implement this the easiest way possible :)
**Tasks**
- [ ] add categories to the database via seed files (wellness, food, etc.)
- [ ] clicking on one of the thumbnails should run a query based on that category or keyword tag
**Categories thumbnails on browse view**

**Excel Spreadsheet**

| 1.0 | Add categories /home to the database - **Category/Keyword tags on /austin (city homepage) view**
Feel free to implement this the easiest way possible :)
**Tasks**
- [ ] add categories to the database via seed files (wellness, food, etc.)
- [ ] clicking on one of the thumbnails should run a query based on that category or keyword tag
**Categories thumbnails on browse view**

**Excel Spreadsheet**

| priority | add categories home to the database category keyword tags on austin city homepage view feel free to implement this the easiest way possible tasks add categories to the database via seed files wellness food etc clicking on one of the thumbnails should run a query based on that category or keyword tag categories thumbnails on browse view excel spreadsheet | 1 |
588,231 | 17,650,345,215 | IssuesEvent | 2021-08-20 12:22:31 | francheska-vicente/cssweng | https://api.github.com/repos/francheska-vicente/cssweng | opened | Dates that are not today are highlighted the same as today's date | bug priority: medium issue: front-end severity: low | ### Summary:
The dates that are not today but have the same day as today gets highlighted the same color as today's date.
E.g. if today is August 20, then September 20 also gets the same highlight
### Steps to Reproduce:
1. Login
2. Go to calendar
3. Go through various months
### Visual Proof:

### Expected Results:
The date that are not today's date should not be highlighted the same. If necessary to be highlighted, they should be in a different color. Otherwise, it should not be highlighted
### Actual Results:
The dates that are not today but have the same day as today gets highlighted the same color as today's date.
| Additional Information | |
| ----------- | ----------- |
| Defect ID | 15-0023 |
| Platform | V8 engine (Google) |
| Operating System | Windows 10 | | 1.0 | Dates that are not today are highlighted the same as today's date - ### Summary:
The dates that are not today but have the same day as today gets highlighted the same color as today's date.
E.g. if today is August 20, then September 20 also gets the same highlight
### Steps to Reproduce:
1. Login
2. Go to calendar
3. Go through various months
### Visual Proof:

### Expected Results:
The date that are not today's date should not be highlighted the same. If necessary to be highlighted, they should be in a different color. Otherwise, it should not be highlighted
### Actual Results:
The dates that are not today but have the same day as today gets highlighted the same color as today's date.
| Additional Information | |
| ----------- | ----------- |
| Defect ID | 15-0023 |
| Platform | V8 engine (Google) |
| Operating System | Windows 10 | | priority | dates that are not today are highlighted the same as today s date summary the dates that are not today but have the same day as today gets highlighted the same color as today s date e g if today is august then september also gets the same highlight steps to reproduce login go to calendar go through various months visual proof expected results the date that are not today s date should not be highlighted the same if necessary to be highlighted they should be in a different color otherwise it should not be highlighted actual results the dates that are not today but have the same day as today gets highlighted the same color as today s date additional information defect id platform engine google operating system windows | 1 |
410,370 | 11,986,859,663 | IssuesEvent | 2020-04-07 20:06:07 | opt-out-tools/website | https://api.github.com/repos/opt-out-tools/website | closed | Initial Website Content Plan | medium priority medium task | **Objective**
Devise initial website content
**Description**
Review the contents of the websites such as iheartmob.com and worldbrain.io on mobile and develop content for the website
**Skills**
**Tools**
**Time Estimation**
**Tasks**
- [ ] Devise website content inventory
- [ ] Write initial content relies on https://github.com/opt-out-tools/theory-of-online-misogyny/issues/21
- [ ] Iterate and add SEO
| 1.0 | Initial Website Content Plan - **Objective**
Devise initial website content
**Description**
Review the contents of the websites such as iheartmob.com and worldbrain.io on mobile and develop content for the website
**Skills**
**Tools**
**Time Estimation**
**Tasks**
- [ ] Devise website content inventory
- [ ] Write initial content relies on https://github.com/opt-out-tools/theory-of-online-misogyny/issues/21
- [ ] Iterate and add SEO
| priority | initial website content plan objective devise initial website content description review the contents of the websites such as iheartmob com and worldbrain io on mobile and develop content for the website skills tools time estimation tasks devise website content inventory write initial content relies on iterate and add seo | 1 |
668,946 | 22,605,101,485 | IssuesEvent | 2022-06-29 12:38:06 | zowe/api-layer | https://api.github.com/repos/zowe/api-layer | closed | Allow customization of URL on an instance of the API ML | enhancement Priority: Medium extender | **Is your feature request related to a problem? Please describe.**
As a user, I don't want to remember complex paths to the services I use. With the current conformance criteria the UI paths needs to contain the name of the company which makes for relatively complex paths such as:
/ui/v1/broadcomacf2
/ui/v1/zowezlux
/ui/ibmdb2
**Describe the solution you'd like**
I want to be able to create an alias for given path that would simplify the work for the user. For example for above mentioned /ui/v1/zowezlux I want to be able to have /zlux reach to the /ui/v{latest}/zlux
This path will be configurable on per Zowe basis and the system programmer configuring it will get the information about already existing aliases and will be rejected if the person tries to reuse the same alias.
**Describe alternatives you've considered**
Do the same aliasing before the API ML, but it adds unnecessary complexity as the API ML has all the information and is a natural point to make these decisions.
| 1.0 | Allow customization of URL on an instance of the API ML - **Is your feature request related to a problem? Please describe.**
As a user, I don't want to remember complex paths to the services I use. With the current conformance criteria the UI paths needs to contain the name of the company which makes for relatively complex paths such as:
/ui/v1/broadcomacf2
/ui/v1/zowezlux
/ui/ibmdb2
**Describe the solution you'd like**
I want to be able to create an alias for given path that would simplify the work for the user. For example for above mentioned /ui/v1/zowezlux I want to be able to have /zlux reach to the /ui/v{latest}/zlux
This path will be configurable on per Zowe basis and the system programmer configuring it will get the information about already existing aliases and will be rejected if the person tries to reuse the same alias.
**Describe alternatives you've considered**
Do the same aliasing before the API ML, but it adds unnecessary complexity as the API ML has all the information and is a natural point to make these decisions.
| priority | allow customization of url on an instance of the api ml is your feature request related to a problem please describe as a user i don t want to remember complex paths to the services i use with the current conformance criteria the ui paths needs to contain the name of the company which makes for relatively complex paths such as ui ui zowezlux ui describe the solution you d like i want to be able to create an alias for given path that would simplify the work for the user for example for above mentioned ui zowezlux i want to be able to have zlux reach to the ui v latest zlux this path will be configurable on per zowe basis and the system programmer configuring it will get the information about already existing aliases and will be rejected if the person tries to reuse the same alias describe alternatives you ve considered do the same aliasing before the api ml but it adds unnecessary complexity as the api ml has all the information and is a natural point to make these decisions | 1 |
436,011 | 12,544,067,949 | IssuesEvent | 2020-06-05 16:36:18 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | opened | Member Counts Inaccurate in Parties and Guilds | help wanted priority: medium section: Guilds section: Party Page type: medium level coding | ### Description
[//]: # (Describe bug in detail here. Include screenshots if helpful.)
Clean-up of #7454. You can visit that ticket to see the full history of discussion on this issue.
`memberCount` in group records is a stored computed value that we currently attempt to keep up to date by incrementing and decrementing as members join and leave. It frequently gets out of sync, leading to people with 6-member parties showing a member count of 7, etc.; this is a nuisance at best, and in some cases involving the 30-person party limit, can actually interfere with people joining or inviting members.
This is the desired fix:
> change the group remove, leave and join routes to recompute `memberCount` each time it needs to be updated (i.e., count all existing members, rather than add or subtract one from the current value of `memberCount`) | 1.0 | Member Counts Inaccurate in Parties and Guilds - ### Description
[//]: # (Describe bug in detail here. Include screenshots if helpful.)
Clean-up of #7454. You can visit that ticket to see the full history of discussion on this issue.
`memberCount` in group records is a stored computed value that we currently attempt to keep up to date by incrementing and decrementing as members join and leave. It frequently gets out of sync, leading to people with 6-member parties showing a member count of 7, etc.; this is a nuisance at best, and in some cases involving the 30-person party limit, can actually interfere with people joining or inviting members.
This is the desired fix:
> change the group remove, leave and join routes to recompute `memberCount` each time it needs to be updated (i.e., count all existing members, rather than add or subtract one from the current value of `memberCount`) | priority | member counts inaccurate in parties and guilds description describe bug in detail here include screenshots if helpful clean up of you can visit that ticket to see the full history of discussion on this issue membercount in group records is a stored computed value that we currently attempt to keep up to date by incrementing and decrementing as members join and leave it frequently gets out of sync leading to people with member parties showing a member count of etc this is a nuisance at best and in some cases involving the person party limit can actually interfere with people joining or inviting members this is the desired fix change the group remove leave and join routes to recompute membercount each time it needs to be updated i e count all existing members rather than add or subtract one from the current value of membercount | 1 |
345,943 | 10,375,170,754 | IssuesEvent | 2019-09-09 11:22:37 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | TreeView template leaks memory on changes in the observable | Bug C: TreeView F: MVVM FP: In Development Kendo2 Next LIB Priority 1 SEV: Medium Triaged | ### Bug report
[HtmlPage1.zip](https://github.com/telerik/kendo-ui-core/files/3519331/HtmlPage1.zip)
### Reproduction of the problem
1. Run the attached page in the browser.
2. Take a 20 sec long heap snapshot. During these 20 sec click the "change" button to make changes to a specific node's data in the observable.
### Current behavior
Increasing number of detached span elements: [screenshot](https://www.screencast.com/t/drEMriZU8IRj)
Memory consumption increases with every change: [screenshot](https://www.screencast.com/t/kxcxBEkg)
There are no detached span elements if the span within the template does not have a data attribute set.
from:
```
<span data-bind="text: treetext"></span>
```
to:
```
<span>#=item.treetext#</span>
```
### Expected/desired behavior
### Environment
* **Kendo UI version:** 2019.2.619
* **jQuery version:** x.y
* **Browser:** [all ]
| 1.0 | TreeView template leaks memory on changes in the observable - ### Bug report
[HtmlPage1.zip](https://github.com/telerik/kendo-ui-core/files/3519331/HtmlPage1.zip)
### Reproduction of the problem
1. Run the attached page in the browser.
2. Take a 20 sec long heap snapshot. During these 20 sec click the "change" button to make changes to a specific node's data in the observable.
### Current behavior
Increasing number of detached span elements: [screenshot](https://www.screencast.com/t/drEMriZU8IRj)
Memory consumption increases with every change: [screenshot](https://www.screencast.com/t/kxcxBEkg)
There are no detached span elements if the span within the template does not have a data attribute set.
from:
```
<span data-bind="text: treetext"></span>
```
to:
```
<span>#=item.treetext#</span>
```
### Expected/desired behavior
### Environment
* **Kendo UI version:** 2019.2.619
* **jQuery version:** x.y
* **Browser:** [all ]
| priority | treeview template leaks memory on changes in the observable bug report reproduction of the problem run the attached page in the browser take a sec long heap snapshot during these sec click the change button to make changes to a specific node s data in the observable current behavior increasing number of detached span elements memory consumption increases with every change there are no detached span elements if the span within the template does not have a data attribute set from to item treetext expected desired behavior environment kendo ui version jquery version x y browser | 1 |
81,778 | 3,594,411,991 | IssuesEvent | 2016-02-01 23:34:11 | jkotlinski/durexforth | https://api.github.com/repos/jkotlinski/durexforth | closed | Cartridge version? | auto-migrated enhancement Priority-Medium | ```
Would free up some RAM
```
Original issue reported on code.google.com by `kotlin...@gmail.com` on 27 Oct 2012 at 11:59 | 1.0 | Cartridge version? - ```
Would free up some RAM
```
Original issue reported on code.google.com by `kotlin...@gmail.com` on 27 Oct 2012 at 11:59 | priority | cartridge version would free up some ram original issue reported on code google com by kotlin gmail com on oct at | 1 |
65,067 | 3,226,198,523 | IssuesEvent | 2015-10-10 03:22:19 | ankidroid/Anki-Android | https://api.github.com/repos/ankidroid/Anki-Android | closed | Make undo on 'Congratulations! ...' screen show the reviewer again | accepted enhancement Priority-Medium | When a study session is done, the 'Congratulations! You have finished for now.' screen is shown. In this screen, the 'undo' button to undo the last operation is shown (but why would it be there?) and clicking it changes the state of the deck (I guess it really undoes the operations) - this can be seen as the screen changes to the one with the 'Due today' information, and clicking the button more times changes the state each time (the numbers change). Seems like a bug. | 1.0 | Make undo on 'Congratulations! ...' screen show the reviewer again - When a study session is done, the 'Congratulations! You have finished for now.' screen is shown. In this screen, the 'undo' button to undo the last operation is shown (but why would it be there?) and clicking it changes the state of the deck (I guess it really undoes the operations) - this can be seen as the screen changes to the one with the 'Due today' information, and clicking the button more times changes the state each time (the numbers change). Seems like a bug. | priority | make undo on congratulations screen show the reviewer again when a study session is done the congratulations you have finished for now screen is shown in this screen the undo button to undo the last operation is shown but why would it be there and clicking it changes the state of the deck i guess it really undoes the operations this can be seen as the screen changes to the one with the due today information and clicking the button more times changes the state each time the numbers change seems like a bug | 1 |
226,647 | 7,522,011,738 | IssuesEvent | 2018-04-12 19:00:25 | AdChain/AdChainRegistryDapp | https://api.github.com/repos/AdChain/AdChainRegistryDapp | closed | Extend Clickable Domains in Domain Table | Priority: Medium Type: UX Enhancement | Within the domains table, extend the ability to click into the single-domain view from `current` to `what we want`.

| 1.0 | Extend Clickable Domains in Domain Table - Within the domains table, extend the ability to click into the single-domain view from `current` to `what we want`.

| priority | extend clickable domains in domain table within the domains table extend the ability to click into the single domain view from current to what we want | 1 |
76,344 | 3,487,304,118 | IssuesEvent | 2016-01-01 19:14:23 | nanovad/ifios | https://api.github.com/repos/nanovad/ifios | opened | Finish ACPI table support | medium priority | Support needs to be finished for the RSDT and FADT for the century register ( #3 ). | 1.0 | Finish ACPI table support - Support needs to be finished for the RSDT and FADT for the century register ( #3 ). | priority | finish acpi table support support needs to be finished for the rsdt and fadt for the century register | 1 |
90,095 | 3,811,663,048 | IssuesEvent | 2016-03-27 00:28:58 | nim-lang/Aporia | https://api.github.com/repos/nim-lang/Aporia | closed | "Go to definition under cursor" work only once | Bug Medium Priority | After first time the definition is found, it's not react on ctrl+shift+r. If close aporia, it will not start untill nimsuggest process will be killed. | 1.0 | "Go to definition under cursor" work only once - After first time the definition is found, it's not react on ctrl+shift+r. If close aporia, it will not start untill nimsuggest process will be killed. | priority | go to definition under cursor work only once after first time the definition is found it s not react on ctrl shift r if close aporia it will not start untill nimsuggest process will be killed | 1 |
577,213 | 17,105,595,821 | IssuesEvent | 2021-07-09 17:11:39 | Secure-Compliance-Solutions-LLC/GVM-Docker | https://api.github.com/repos/Secure-Compliance-Solutions-LLC/GVM-Docker | closed | [Enhancement] Docker secret for password | Priority:Medium enhancement | I have set up GVM via a compose file.
When I use the password in plain text as an environment variable, the password is used to log in.
When I use a docker secret
- PASSWORD=/run/secrets/gvm_password
the password is set to '/run/secrets/gvm_password' instead of the contents of the docker secret gvm_password.
| 1.0 | [Enhancement] Docker secret for password - I have set up GVM via a compose file.
When I use the password in plain text as an environment variable, the password is used to log in.
When I use a docker secret
- PASSWORD=/run/secrets/gvm_password
the password is set to '/run/secrets/gvm_password' instead of the contents of the docker secret gvm_password.
| priority | docker secret for password i have set up gvm via a compose file when i use the password in plain text as an environment variable the password is used to log in when i use a docker secret password run secrets gvm password the password is set to run secrets gvm password instead of the contents of the docker secret gvm password | 1 |
41,469 | 2,869,008,861 | IssuesEvent | 2015-06-05 22:33:04 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | rename DateFormat to DateTimeFormat | Area-Pkg Pkg-Intl Priority-Medium Triaged Type-Enhancement | *This issue was originally filed by @seaneagan*
_____
... since Date was renamed to DateTime. DateFormat would be confusing once we get a Date class.
| 1.0 | rename DateFormat to DateTimeFormat - *This issue was originally filed by @seaneagan*
_____
... since Date was renamed to DateTime. DateFormat would be confusing once we get a Date class.
| priority | rename dateformat to datetimeformat this issue was originally filed by seaneagan since date was renamed to datetime dateformat would be confusing once we get a date class | 1 |
78,829 | 3,517,891,195 | IssuesEvent | 2016-01-12 10:10:30 | infoScoop/infoscoop | https://api.github.com/repos/infoScoop/infoscoop | opened | Limit: Create a Square / invite users(for square) | Priority-Medium Type-Enhancement | It provides a limit to the Square creation and user invitation.
default:
Square creation: 3(maximum)
Invitation: 10 users(maximum) | 1.0 | Limit: Create a Square / invite users(for square) - It provides a limit to the Square creation and user invitation.
default:
Square creation: 3(maximum)
Invitation: 10 users(maximum) | priority | limit create a square invite users for square it provides a limit to the square creation and user invitation default square creation maximum invitation users maximum | 1 |
417,160 | 12,156,264,191 | IssuesEvent | 2020-04-25 16:35:17 | code4romania/stam-acasa | https://api.github.com/repos/code4romania/stam-acasa | closed | Future dates can be selected from calendar | front-end good first issue help wanted medium-priority | Starting point: https://dev.stamacasa.ro/evaluation/me
Expected: User should not be able to select future dates.
Actual: User is able to select any dates.

| 1.0 | Future dates can be selected from calendar - Starting point: https://dev.stamacasa.ro/evaluation/me
Expected: User should not be able to select future dates.
Actual: User is able to select any dates.

| priority | future dates can be selected from calendar starting point expected user should not be able to select future dates actual user is able to select any dates | 1 |
229,930 | 7,601,175,571 | IssuesEvent | 2018-04-28 10:32:13 | esaude/esaude-emr-poc | https://api.github.com/repos/esaude/esaude-emr-poc | closed | [Registration] Visit history not informing empty list | Medium Priority enhancement | Actual Results
--
When there is no visit history for the patient, the system is not providing any information.
Expected results
--
The system should inform that there is no history of visits for the patient
Steps to reproduce
--
Login > Registration module > Search or create a patient with no visit history > Histórico de Visitas
Screenshot/Attachment (Optional)
--
A visual description of the unexpected behaviour.

| 1.0 | [Registration] Visit history not informing empty list - Actual Results
--
When there is no visit history for the patient, the system is not providing any information.
Expected results
--
The system should inform that there is no history of visits for the patient
Steps to reproduce
--
Login > Registration module > Search or create a patient with no visit history > Histórico de Visitas
Screenshot/Attachment (Optional)
--
A visual description of the unexpected behaviour.

| priority | visit history not informing empty list actual results when there is no visit history for the patient the system is not providing any information expected results the system should inform that there is no history of visits for the patient steps to reproduce login registration module search or create a patient with no visit history histórico de visitas screenshot attachment optional a visual description of the unexpected behaviour | 1 |
22,221 | 2,645,778,435 | IssuesEvent | 2015-03-13 02:10:48 | prikhi/evoluspencil | https://api.github.com/repos/prikhi/evoluspencil | closed | Properties page initial focus not set | 1 star bug imported Priority-Medium | _From [dneb...@gmail.com](https://code.google.com/u/118392894718639048841/) on October 06, 2009 12:44:17_
What steps will reproduce the problem? 1. Open properties page. What is the expected output? What do you see instead? For a label (and most of the stencils), I'd expect the focus to be set in
the 'Label' field. Instead, every time I open a properties page I first
have to click the default field just to begin typing. What version of the product are you using? On what operating system? 1.0 build 6, WinXP fully patched, FF 3.5.3. Please provide any additional information below. It's just a pain to deal w/ the properties, but you have to in order to
change label values, button labels, text field contents, etc...
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=115_ | 1.0 | Properties page initial focus not set - _From [dneb...@gmail.com](https://code.google.com/u/118392894718639048841/) on October 06, 2009 12:44:17_
What steps will reproduce the problem? 1. Open properties page. What is the expected output? What do you see instead? For a label (and most of the stencils), I'd expect the focus to be set in
the 'Label' field. Instead, every time I open a properties page I first
have to click the default field just to begin typing. What version of the product are you using? On what operating system? 1.0 build 6, WinXP fully patched, FF 3.5.3. Please provide any additional information below. It's just a pain to deal w/ the properties, but you have to in order to
change label values, button labels, text field contents, etc...
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=115_ | priority | properties page initial focus not set from on october what steps will reproduce the problem open properties page what is the expected output what do you see instead for a label and most of the stencils i d expect the focus to be set in the label field instead every time i open a properties page i first have to click the default field just to begin typing what version of the product are you using on what operating system build winxp fully patched ff please provide any additional information below it s just a pain to deal w the properties but you have to in order to change label values button labels text field contents etc original issue | 1 |
719,018 | 24,741,818,294 | IssuesEvent | 2022-10-21 06:11:36 | CS3219-AY2223S1/cs3219-project-ay2223s1-g33 | https://api.github.com/repos/CS3219-AY2223S1/cs3219-project-ay2223s1-g33 | closed | User Session Not Terminated After Password Reset | Bug Module/Back-End Status/Medium-Priority | ## Summary
When a reset password is performed, all existing user sessions should be invalidated. At the moment, this does not occur, and is technically challenging because there is no tracking of issued tokens. | 1.0 | User Session Not Terminated After Password Reset - ## Summary
When a reset password is performed, all existing user sessions should be invalidated. At the moment, this does not occur, and is technically challenging because there is no tracking of issued tokens. | priority | user session not terminated after password reset summary when a reset password is performed all existing user sessions should be invalidated at the moment this does not occur and is technically challenging because there is no tracking of issued tokens | 1 |
271,288 | 8,482,274,152 | IssuesEvent | 2018-10-25 18:04:48 | FServais/BoardgameWE | https://api.github.com/repos/FServais/BoardgameWE | closed | Login case sensitive? | component/backend priority/low priority/medium | When I try to login as Fservais instead of fservais, the login fails (user not found).
Should we leave the login as case sensitive or not? :) | 2.0 | Login case sensitive? - When I try to login as Fservais instead of fservais, the login fails (user not found).
Should we leave the login as case sensitive or not? :) | priority | login case sensitive when i try to login as fservais instead of fservais the login fails user not found should we leave the login as case sensitive or not | 1 |
681,708 | 23,320,904,602 | IssuesEvent | 2022-08-08 16:16:53 | PHI-base/PHI5_web_display | https://api.github.com/repos/PHI-base/PHI5_web_display | opened | Change which data is displayed in the Pathogen and Host sections | medium priority | (Follow-up from #51)
The PHI-base team has recently reviewed the Pathogen and Host sections of the gene page and identified a number of problems. We've decided to clarify the requirements for these sections.
* For **pathogen** gene pages:
* The Pathogen section should show a list of all pathogen strains that have annotations involving the gene of the gene page.
* The Host section should display a list of genes, plus the scientific name and NCBI Taxonomy ID, of any host that interacts with the pathogen (as part of a metagenotype). Wild type host genotypes (genotypes with no genes) will presumably not be included now.
* For **host** gene pages:
* The Host section should show a list of all host strains that have annotations involving the gene of the gene page.
* The Pathogen section should display a list of genes, plus the scientific name and NCBI Taxonomy ID, of any pathogen that interacts with the host (as part of a metagenotype).
* We also decided that **we shouldn't show the Reference column** in either the Pathogen or Host section, because the reference is included with the annotations in other tables, and the reference is not likely to work well when data is being aggregated like this.
### Pathogen gene page
Below is a mockup of how the Pathogen and Host sections should appear for a **pathogen gene**, specifically TRI5 of _Fusarium graminearum_ (PHIG:253).

Note that UniProtKB has no names for the genes in the image above, and we don't export the gene names we have recorded (FER1) in the PHI-Canto JSON export independently of the allele names. So for now, we'll probably just have to display the UniProtKB accession number in the gene column when there is no gene name in UniProt.
### Host gene page
Below is a mockup of how the Pathogen and Host sections should appear for a **host gene**, specifically Cf-4A of _Solanum lycopersicum_ (PHIG:311).

Note that in this case there are two strains listed in the Host section, because the Cf-4A gene has been annotated as part of two strains. The current interface only displays "cv. Moneymaker", which is incorrect. The pathogen gene also has a name in this case because the name exists in UniProtKB.
The mockup above shows row grouping in the Pathogen section so that the host name and taxon ID is not repeated every row: this would be nice to have, but is not absolutely required.
- - -
@Molecular-Connections Since the logic to extract the correct data from the export could be quite difficult, I could include summary lists of strains and interacting genes for each gene in the new JSON export format, so for these sections you would only have to display data that is already in the export.
Alternatively, I could provide instructions (pseudocode) for how to extract the data from the current export format.
Please let me know what you'd prefer. | 1.0 | Change which data is displayed in the Pathogen and Host sections - (Follow-up from #51)
The PHI-base team has recently reviewed the Pathogen and Host sections of the gene page and identified a number of problems. We've decided to clarify the requirements for these sections.
* For **pathogen** gene pages:
* The Pathogen section should show a list of all pathogen strains that have annotations involving the gene of the gene page.
* The Host section should display a list of genes, plus the scientific name and NCBI Taxonomy ID, of any host that interacts with the pathogen (as part of a metagenotype). Wild type host genotypes (genotypes with no genes) will presumably not be included now.
* For **host** gene pages:
* The Host section should show a list of all host strains that have annotations involving the gene of the gene page.
* The Pathogen section should display a list of genes, plus the scientific name and NCBI Taxonomy ID, of any pathogen that interacts with the host (as part of a metagenotype).
* We also decided that **we shouldn't show the Reference column** in either the Pathogen or Host section, because the reference is included with the annotations in other tables, and the reference is not likely to work well when data is being aggregated like this.
### Pathogen gene page
Below is a mockup of how the Pathogen and Host sections should appear for a **pathogen gene**, specifically TRI5 of _Fusarium graminearum_ (PHIG:253).

Note that UniProtKB has no names for the genes in the image above, and we don't export the gene names we have recorded (FER1) in the PHI-Canto JSON export independently of the allele names. So for now, we'll probably just have to display the UniProtKB accession number in the gene column when there is no gene name in UniProt.
### Host gene page
Below is a mockup of how the Pathogen and Host sections should appear for a **host gene**, specifically Cf-4A of _Solanum lycopersicum_ (PHIG:311).

Note that in this case there are two strains listed in the Host section, because the Cf-4A gene has been annotated as part of two strains. The current interface only displays "cv. Moneymaker", which is incorrect. The pathogen gene also has a name in this case because the name exists in UniProtKB.
The mockup above shows row grouping in the Pathogen section so that the host name and taxon ID is not repeated every row: this would be nice to have, but is not absolutely required.
- - -
@Molecular-Connections Since the logic to extract the correct data from the export could be quite difficult, I could include summary lists of strains and interacting genes for each gene in the new JSON export format, so for these sections you would only have to display data that is already in the export.
Alternatively, I could provide instructions (pseudocode) for how to extract the data from the current export format.
Please let me know what you'd prefer. | priority | change which data is displayed in the pathogen and host sections follow up from the phi base team has recently reviewed the pathogen and host sections of the gene page and identified a number of problems we ve decided to clarify the requirements for these sections for pathogen gene pages the pathogen section should show a list of all pathogen strains that have annotations involving the gene of the gene page the host section should display a list of genes plus the scientific name and ncbi taxonomy id of any host that interacts with the pathogen as part of a metagenotype wild type host genotypes genotypes with no genes will presumably not be included now for host gene pages the host section should show a list of all host strains that have annotations involving the gene of the gene page the pathogen section should display a list of genes plus the scientific name and ncbi taxonomy id of any pathogen that interacts with the host as part of a metagenotype we also decided that we shouldn t show the reference column in either the pathogen or host section because the reference is included with the annotations in other tables and the reference is not likely to work well when data is being aggregated like this pathogen gene page below is a mockup of how the pathogen and host sections should appear for a pathogen gene specifically of fusarium graminearum phig note that uniprotkb has no names for the genes in the image above and we don t export the gene names we have recorded in the phi canto json export independently of the allele names so for now we ll probably just have to display the uniprotkb accession number in the gene column when there is no gene name in uniprot host gene page below is a mockup of how the pathogen and host sections should appear for a host gene specifically cf of solanum lycopersicum phig note that in this case there are two strains listed in the host section because the cf gene has been annotated as 
part of two strains the current interface only displays cv moneymaker which is incorrect the pathogen gene also has a name in this case because the name exists in uniprotkb the mockup above shows row grouping in the pathogen section so that the host name and taxon id is not repeated every row this would be nice to have but is not absolutely required molecular connections since the logic to extract the correct data from the export could be quite difficult i could include summary lists of strains and interacting genes for each gene in the new json export format so for these sections you would only have to display data that is already in the export alternatively i could provide instructions pseudocode for how to extract the data from the current export format please let me know what you d prefer | 1 |
48,899 | 3,000,827,245 | IssuesEvent | 2015-07-24 06:30:19 | jayway/powermock | https://api.github.com/repos/jayway/powermock | closed | How to mock this class!!!!!!!!!!!!!!! | bug imported Priority-Medium wontfix | _From [anuopass...@gmail.com](https://code.google.com/u/110762255943320889841/) on May 17, 2010 19:13:57_
I want to mock this class using powerMock,when i try to test it thrown
java.lang.ExceptionInInitializerError
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke
why ??? How to mock it?
class:
public final class ContentFacade
{
private ContentFacade()
{
contentService = (ContentService)
BaselineStartUp.getApplicationContext().getBean("contentService");
folderContentService = (FolderContentService)
BaselineStartUp.getApplicationContext().getBean("folderContentService");
providerService = (ProviderService)
BaselineStartUp.getApplicationContext().getBean("providerService");
basicService = (BasicService)BaselineStartUp.getApplicationContext
().getBean("basicService");
unknowTerm = (TerminalInfo)BaselineStartUp.getApplicationContext
().getBean("unknowTerm");
}
public static ContentFacade getInstance()
{
return INSTANCE;
}
}
test:
ContentFacade cfde=PowerMock.createMock(ContentFacade.class); Please provide any additional information below.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=260_ | 1.0 | How to mock this class!!!!!!!!!!!!!!! - _From [anuopass...@gmail.com](https://code.google.com/u/110762255943320889841/) on May 17, 2010 19:13:57_
I want to mock this class using powerMock,when i try to test it thrown
java.lang.ExceptionInInitializerError
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke
why ??? How to mock it?
class:
public final class ContentFacade
{
private ContentFacade()
{
contentService = (ContentService)
BaselineStartUp.getApplicationContext().getBean("contentService");
folderContentService = (FolderContentService)
BaselineStartUp.getApplicationContext().getBean("folderContentService");
providerService = (ProviderService)
BaselineStartUp.getApplicationContext().getBean("providerService");
basicService = (BasicService)BaselineStartUp.getApplicationContext
().getBean("basicService");
unknowTerm = (TerminalInfo)BaselineStartUp.getApplicationContext
().getBean("unknowTerm");
}
public static ContentFacade getInstance()
{
return INSTANCE;
}
}
test:
ContentFacade cfde=PowerMock.createMock(ContentFacade.class); Please provide any additional information below.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=260_ | priority | how to mock this class from on may i want to mock this class using powermock when i try to test it thrown java lang exceptionininitializererror at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke why how to mock it class public final class contentfacade private contentfacade contentservice contentservice baselinestartup getapplicationcontext getbean contentservice foldercontentservice foldercontentservice baselinestartup getapplicationcontext getbean foldercontentservice providerservice providerservice baselinestartup getapplicationcontext getbean providerservice basicservice basicservice baselinestartup getapplicationcontext getbean basicservice unknowterm terminalinfo baselinestartup getapplicationcontext getbean unknowterm public static contentfacade getinstance return instance test contentfacade cfde powermock createmock contentfacade class please provide any additional information below original issue | 1 |
388,302 | 11,485,972,861 | IssuesEvent | 2020-02-11 09:00:17 | OpenSourceEconomics/ruspy | https://api.github.com/repos/OpenSourceEconomics/ruspy | closed | Newton–Kantorovich (NK) iterations | enhancement priority medium size large | We need to incorporate Newton–Kantorovich (NK) iterations in our value function iteration process. @MaxBlesch, please review all elements of our code in light of the replication material provided here
https://www.econometricsociety.org/publications/econometrica/2016/01/01/comment-%E2%80%9Cconstrained-optimization-approaches-estimation | 1.0 | Newton–Kantorovich (NK) iterations - We need to incorporate Newton–Kantorovich (NK) iterations in our value function iteration process. @MaxBlesch, please review all elements of our code in light of the replication material provided here
https://www.econometricsociety.org/publications/econometrica/2016/01/01/comment-%E2%80%9Cconstrained-optimization-approaches-estimation | priority | newton–kantorovich nk iterations we need to incorporate newton–kantorovich nk iterations in our value function iteration process maxblesch please review all elements of our code in light of the replication material provided here | 1 |
477,318 | 13,759,815,412 | IssuesEvent | 2020-10-07 04:10:04 | STAMACODING/RSA-App | https://api.github.com/repos/STAMACODING/RSA-App | opened | Testen! | medium priority | Unsere Anwendung ist mittlerweile ziemlich groß geworden. Neben der Dokumentation, sollten wir einzelne Funktionen auch mehr testen (JUnit). | 1.0 | Testen! - Unsere Anwendung ist mittlerweile ziemlich groß geworden. Neben der Dokumentation, sollten wir einzelne Funktionen auch mehr testen (JUnit). | priority | testen unsere anwendung ist mittlerweile ziemlich groß geworden neben der dokumentation sollten wir einzelne funktionen auch mehr testen junit | 1 |
57,982 | 3,086,943,445 | IssuesEvent | 2015-08-25 08:21:16 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | Автоматическое удаление файла со списком подключаемых хабов при выходе | bug imported Priority-Medium | _From [S.Artyuk...@gmail.com](https://code.google.com/u/106149142188586589113/) on September 16, 2013 14:40:46_
Неуверен, что проблема во флае, но опишу:
За последние две недели повторилась не менее четырех раз, ранее не наблюдалась.
Версия: 502 бетка, самая последняя из автоматического обновления
После закрытия подключенных хабов и выхода из программы из каталога с настройками пропадал файл содержащий список подключаемых хабов. Как будто флай подчищая свои временные файлы подчищал и его.
Дефект плавающий. Сколько специально не пытался поймать не удалось
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1288_ | 1.0 | Автоматическое удаление файла со списком подключаемых хабов при выходе - _From [S.Artyuk...@gmail.com](https://code.google.com/u/106149142188586589113/) on September 16, 2013 14:40:46_
Неуверен, что проблема во флае, но опишу:
За последние две недели повторилась не менее четырех раз, ранее не наблюдалась.
Версия: 502 бетка, самая последняя из автоматического обновления
После закрытия подключенных хабов и выхода из программы из каталога с настройками пропадал файл содержащий список подключаемых хабов. Как будто флай подчищая свои временные файлы подчищал и его.
Дефект плавающий. Сколько специально не пытался поймать не удалось
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1288_ | priority | автоматическое удаление файла со списком подключаемых хабов при выходе from on september неуверен что проблема во флае но опишу за последние две недели повторилась не менее четырех раз ранее не наблюдалась версия бетка самая последняя из автоматического обновления после закрытия подключенных хабов и выхода из программы из каталога с настройками пропадал файл содержащий список подключаемых хабов как будто флай подчищая свои временные файлы подчищал и его дефект плавающий сколько специально не пытался поймать не удалось original issue | 1 |
52,985 | 3,032,301,971 | IssuesEvent | 2015-08-05 07:54:04 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | closed | Port weave functions to Cython | maintainability performance Priority-Medium Python3 usability | So scipy.weave doesn't support Python 3, so functions written in weave should be ported to Cython to avoid blocking Python 3 (See #260)
There's currently 2 functions in analysis.distances which use weave:
* contact_matrix_pbc
* contact_matrix_no_pbc
I think these were originally written by @jandom but looking at them it should be easy to move them into src/numtools/calc_distances.h then write a new .pyx interface.
An option is to move these functions inside the distances.pyx interface, so then they could (eventually) share the parallelism there (assuming #261 goes ahead). | 1.0 | Port weave functions to Cython - So scipy.weave doesn't support Python 3, so functions written in weave should be ported to Cython to avoid blocking Python 3 (See #260)
There's currently 2 functions in analysis.distances which use weave:
* contact_matrix_pbc
* contact_matrix_no_pbc
I think these were originally written by @jandom but looking at them it should be easy to move them into src/numtools/calc_distances.h then write a new .pyx interface.
An option is to move these functions inside the distances.pyx interface, so then they could (eventually) share the parallelism there (assuming #261 goes ahead). | priority | port weave functions to cython so scipy weave doesn t support python so functions written in weave should be ported to cython to avoid blocking python see there s currently functions in analysis distances which use weave contact matrix pbc contact matrix no pbc i think these were originally written by jandom but looking at them it should be easy to move them into src numtools calc distances h then write a new pyx interface an option is to move these functions inside the distances pyx interface so then they could eventually share the parallelism there assuming goes ahead | 1 |
208,220 | 7,137,098,784 | IssuesEvent | 2018-01-23 09:49:51 | yandex/pandora | https://api.github.com/repos/yandex/pandora | closed | Make sample reporting non blocking | Priority: Medium Status: In Progress Type: Feature | From Aggregator doc:
``` go
// If Aggregator can't process reported sample without blocking, it should just throw it away.
// If any reported samples were thrown away, Run should return error describing how many samples
// were thrown away.
```
We should implement that behaviour in all existing Aggregators. | 1.0 | Make sample reporting non blocking - From Aggregator doc:
``` go
// If Aggregator can't process reported sample without blocking, it should just throw it away.
// If any reported samples were thrown away, Run should return error describing how many samples
// were thrown away.
```
We should implement that behaviour in all existing Aggregators. | priority | make sample reporting non blocking from aggregator doc go if aggregator can t process reported sample without blocking it should just throw it away if any reported samples were thrown away run should return error describing how many samples were thrown away we should implement that behaviour in all existing aggregators | 1 |
145,086 | 5,558,395,219 | IssuesEvent | 2017-03-24 14:39:03 | USGCRP/gcis | https://api.github.com/repos/USGCRP/gcis | closed | gcisops should own all DB tables | a quickfix an enhancement context Content Management priority medium type technical | Ownership on the DB tables still has bduggan listed. Convert this to gcisops.
This will make the public content releases smoother. | 1.0 | gcisops should own all DB tables - Ownership on the DB tables still has bduggan listed. Convert this to gcisops.
This will make the public content releases smoother. | priority | gcisops should own all db tables ownership on the db tables still has bduggan listed convert this to gcisops this will make the public content releases smoother | 1 |
561,174 | 16,612,557,909 | IssuesEvent | 2021-06-02 13:17:07 | chef/automate | https://api.github.com/repos/chef/automate | opened | Dependabot issue -4 | MEDIUM PRIORITY | Vulnerability - hosted-git-info
Component- …/chef-ui-library/package-lock.json
CVE : CVE-2021-23362
https://github.com/advisories/GHSA-43f8-2h32-f4cj
Alert description : https://github.com/chef/automate/security/dependabot/components/chef-ui-library/package-lock.json/hosted-git-info/closed
Note: This issue was closed by mistake.
| 1.0 | Dependabot issue -4 - Vulnerability - hosted-git-info
Component- …/chef-ui-library/package-lock.json
CVE : CVE-2021-23362
https://github.com/advisories/GHSA-43f8-2h32-f4cj
Alert description : https://github.com/chef/automate/security/dependabot/components/chef-ui-library/package-lock.json/hosted-git-info/closed
Note: This issue was closed by mistake.
| priority | dependabot issue vulnerability hosted git info component … chef ui library package lock json cve cve alert description note this issue was closed by mistake | 1 |
311,644 | 9,536,975,478 | IssuesEvent | 2019-04-30 11:13:04 | AlternativetIT/alleos-issues | https://api.github.com/repos/AlternativetIT/alleos-issues | opened | Udløbsdato på bestyrelsesteams | enhancement low priority medium | Bestyrelsesteams skal have et udløb, så man ved oprettelse af et team bliver tvunget til at sætte et udløb på teamet. | 1.0 | Udløbsdato på bestyrelsesteams - Bestyrelsesteams skal have et udløb, så man ved oprettelse af et team bliver tvunget til at sætte et udløb på teamet. | priority | udløbsdato på bestyrelsesteams bestyrelsesteams skal have et udløb så man ved oprettelse af et team bliver tvunget til at sætte et udløb på teamet | 1 |
84,248 | 3,655,795,346 | IssuesEvent | 2016-02-17 17:28:10 | Captianrock/android_PV | https://api.github.com/repos/Captianrock/android_PV | opened | Create basic GUI to collect files to parse and output | Medium Priority New Feature | Create a basic GUI that essentially starts the program - tells parser where to find files based on user input | 1.0 | Create basic GUI to collect files to parse and output - Create a basic GUI that essentially starts the program - tells parser where to find files based on user input | priority | create basic gui to collect files to parse and output create a basic gui that essentially starts the program tells parser where to find files based on user input | 1 |
369,594 | 10,915,213,060 | IssuesEvent | 2019-11-21 10:42:27 | OceanShare/OceanShare-iOS | https://api.github.com/repos/OceanShare/OceanShare-iOS | closed | Send user name in markers table | Priority: Medium Status: Abandoned Type: Optimization Type: Production | Send user name in the markers table instead of retrieving userid for every markers and duplicate requests. | 1.0 | Send user name in markers table - Send user name in the markers table instead of retrieving userid for every markers and duplicate requests. | priority | send user name in markers table send user name in the markers table instead of retrieving userid for every markers and duplicate requests | 1 |
706,734 | 24,282,597,355 | IssuesEvent | 2022-09-28 18:48:57 | codbex/codbex-kronos | https://api.github.com/repos/codbex/codbex-kronos | closed | [HDBDD] Data type error | priority-medium effort-medium customer parsers | From xsk created by [aihanrashidov](https://github.com/aihanrashidov): SAP/xsk#1179
Error in the `Problems` view `Error at line: 8. No such type found: Double`
Sample which would cause the error:
```
namespace sap.db;
@Schema: 'ADMIN'
context Products {
entity Item {
key ItemId : Double;
OrderId : String(500);
};
};
```
Reference to CDS [types](https://help.sap.com/viewer/09b6623836854766b682356393c6c416/2.0.00/en-US/a83fe9b8de1c4f4bbee3eea675851a04.html)
See com/sap/xsk/parser/hdbdd/symbols/SymbolTable.java:56
| 1.0 | [HDBDD] Data type error - From xsk created by [aihanrashidov](https://github.com/aihanrashidov): SAP/xsk#1179
Error in the `Problems` view `Error at line: 8. No such type found: Double`
Sample which would cause the error:
```
namespace sap.db;
@Schema: 'ADMIN'
context Products {
entity Item {
key ItemId : Double;
OrderId : String(500);
};
};
```
Reference to CDS [types](https://help.sap.com/viewer/09b6623836854766b682356393c6c416/2.0.00/en-US/a83fe9b8de1c4f4bbee3eea675851a04.html)
See com/sap/xsk/parser/hdbdd/symbols/SymbolTable.java:56
| priority | data type error from xsk created by sap xsk error in the problems view error at line no such type found double sample which would cause the error namespace sap db schema admin context products entity item key itemid double orderid string reference to cds see com sap xsk parser hdbdd symbols symboltable java | 1 |
732,290 | 25,253,526,209 | IssuesEvent | 2022-11-15 16:17:20 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | opened | Creature_formations cannot handle overlapping formations | ChromieCraft Generic Priority-Medium | ### Current Behaviour
It's currently not possible to use creature_formations to link the same member to 2 different leaders, making some Blizzlike mechanics hard to implement.
If both leaderGUID and memberGUID are made primary keys, making the query is possible but one formation will work while the other will not.
### Expected Blizzlike Behaviour
In the image below, all of the packs in red are linked together. And, if the boss is pulled before all of the packs are dealt with, they will aggro as well.

### Source
Retail
### Steps to reproduce the problem
<details><summary>SQL Query</summary>
ALTER TABLE `creature_formations`
DROP PRIMARY KEY,
ADD PRIMARY KEY (`memberGUID`, `leaderGUID`) USING BTREE;
-- Static Formations
DELETE FROM `creature_formations` WHERE `leaderGUID` IN (83304,83306,83309,83311,83314,83316,83332,83335,83338,83340,83347,86396,91124,91134,91139,91151,91154,91162,91163,91164,91167,91174,91179,91182,91186,91188,91197,91204,91208,91212,91218,91223,91226,91231,91238,91242) AND `memberGUID` IN (83304,83305,83306,83307,83309,83310,83311,83312,83313,83314,83315,83316,83317,83318,83319,83320,83321,83332,83333,83335,83336,83338,83339,83340,83341,83342,83343,83347,83348,86396,91121,91122,91124,91125,91126,91134,91135,91136,91137,91138,91139,91140,91141,91142,91143,91151,91152,91153,91154,91155,91156,91162,91163,91164,91165,91166,91167,91168,91169,91170,91174,91175,91176,91179,91180,91182,91183,91184,91186,91187,91188,91189,91190,91194,91197,91198,91199,91200,91201,91202,91203,91204,91205,91206,91208,91209,91210,91212,91213,91214,91218,91219,91220,91221,91222,91223,91224,91225,91226,91227,91228,91229,91231,91232,91233,91238,91239,91242,91243,91244) AND `groupAI` IN (1, 3);
INSERT INTO `creature_formations` (`leaderGUID`, `memberGUID`, `dist`, `angle`, `groupAI`) VALUES
(83304, 83304, 0, 0, 3),
(83304, 83305, 0, 0, 3),
(83306, 83306, 0, 0, 3),
(83306, 83307, 0, 0, 3),
(83311, 83311, 0, 0, 3),
(83311, 83312, 0, 0, 3),
(83311, 83313, 0, 0, 3),
(83314, 83314, 0, 0, 3),
(83314, 83319, 0, 0, 3),
(83314, 83321, 0, 0, 3),
(83316, 83316, 0, 0, 3),
(83316, 83317, 0, 0, 3),
(83316, 83318, 0, 0, 3),
(83309, 83309, 0, 0, 3),
(83309, 83310, 0, 0, 3),
(91163, 91163, 0, 0, 1),
(91163, 83310, 0, 0, 1),
(91163, 83318, 0, 0, 1),
(91163, 83317, 0, 0, 1),
(91163, 83316, 0, 0, 1),
(91163, 83312, 0, 0, 1),
(91163, 83313, 0, 0, 1),
(91163, 83311, 0, 0, 1),
(91163, 83320, 0, 0, 1),
(91163, 83315, 0, 0, 1),
(91163, 83314, 0, 0, 1),
(91163, 83321, 0, 0, 1),
(91163, 83319, 0, 0, 1),
(91238, 91238, 0, 0, 3),
(91238, 91239, 0, 0, 3),
(91226, 91226, 0, 0, 3),
(91226, 91227, 0, 0, 3),
(91226, 91228, 0, 0, 3),
(91226, 91229, 0, 0, 3),
(91242, 91242, 0, 0, 3),
(91242, 91243, 0, 0, 3),
(91242, 91244, 0, 0, 3),
(91154, 91154, 0, 0, 3),
(91154, 91155, 0, 0, 3),
(91154, 91156, 0, 0, 3),
(91218, 91218, 0, 0, 3),
(91218, 91219, 0, 0, 3),
(91218, 91220, 0, 0, 3),
(91218, 91221, 0, 0, 3),
(91218, 91222, 0, 0, 3),
(91223, 91223, 0, 0, 3),
(91223, 91224, 0, 0, 3),
(91223, 91225, 0, 0, 3),
(91212, 91212, 0, 0, 3),
(91212, 91213, 0, 0, 3),
(91212, 91214, 0, 0, 3),
(91151, 91151, 0, 0, 3),
(91151, 91152, 0, 0, 3),
(91151, 91153, 0, 0, 3),
(91208, 91208, 0, 0, 3),
(91208, 91209, 0, 0, 3),
(91208, 91210, 0, 0, 3),
(83340, 83340, 0, 0, 3),
(83340, 83341, 0, 0, 3),
(83340, 83342, 0, 0, 3),
(83340, 83343, 0, 0, 3),
(91124, 91124, 0, 0, 3),
(91124, 91125, 0, 0, 3),
(91124, 91126, 0, 0, 3),
(91188, 91188, 0, 0, 3),
(91188, 91189, 0, 0, 3),
(91188, 91190, 0, 0, 3),
(91186, 91186, 0, 0, 3),
(91186, 91187, 0, 0, 3),
(83347, 83347, 0, 0, 3),
(83347, 83348, 0, 0, 3),
(86396, 86396, 0, 0, 3),
(86396, 91121, 0, 0, 3),
(86396, 91122, 0, 0, 3),
(83338, 83338, 0, 0, 3),
(83338, 83339, 0, 0, 3),
(91231, 91231, 0, 0, 3),
(91231, 91232, 0, 0, 3),
(91231, 91233, 0, 0, 3),
(83335, 83335, 0, 0, 3),
(83335, 83336, 0, 0, 3),
(83332, 83332, 0, 0, 3),
(83332, 83333, 0, 0, 3),
(91164, 91164, 0, 0, 3),
(91164, 91165, 0, 0, 3),
(91164, 91166, 0, 0, 3),
(91179, 91179, 0, 0, 3),
(91179, 91180, 0, 0, 3),
(91182, 91182, 0, 0, 3),
(91182, 91183, 0, 0, 3),
(91182, 91184, 0, 0, 3),
(91174, 91174, 0, 0, 3),
(91174, 91175, 0, 0, 3),
(91174, 91176, 0, 0, 3),
(91167, 91167, 0, 0, 3),
(91167, 91168, 0, 0, 3),
(91167, 91169, 0, 0, 3),
(91167, 91170, 0, 0, 3),
(91204, 91204, 0, 0, 3),
(91204, 91205, 0, 0, 3),
(91204, 91206, 0, 0, 3),
(91197, 91197, 0, 0, 3),
(91197, 91198, 0, 0, 3),
(91197, 91199, 0, 0, 3),
(91197, 91200, 0, 0, 3),
(91134, 91134, 0, 0, 3),
(91134, 91135, 0, 0, 3),
(91134, 91136, 0, 0, 3),
(91134, 91137, 0, 0, 3),
(91134, 91138, 0, 0, 3),
(91139, 91139, 0, 0, 3),
(91139, 91140, 0, 0, 3),
(91139, 91141, 0, 0, 3),
(91139, 91142, 0, 0, 3),
(91139, 91143, 0, 0, 3),
(91162, 91162, 0, 0, 1),
(91162, 91204, 0, 0, 1),
(91162, 91205, 0, 0, 1),
(91162, 91206, 0, 0, 1),
(91162, 91197, 0, 0, 1),
(91162, 91198, 0, 0, 1),
(91162, 91199, 0, 0, 1),
(91162, 91200, 0, 0, 1),
(91162, 91134, 0, 0, 1),
(91162, 91135, 0, 0, 1),
(91162, 91136, 0, 0, 1),
(91162, 91137, 0, 0, 1),
(91162, 91138, 0, 0, 1),
(91162, 91139, 0, 0, 1),
(91162, 91140, 0, 0, 1),
(91162, 91141, 0, 0, 1),
(91162, 91142, 0, 0, 1),
(91162, 91143, 0, 0, 1),
(91162, 91194, 0, 0, 1),
(91162, 91201, 0, 0, 1),
(91162, 91203, 0, 0, 1),
(91162, 91202, 0, 0, 1);
</details>
1. .tele manatomb
2. engage the packs near the entrance, see if they pull together
3. reset all encounters
4. engage the first boss, see if the packs near the boss also are aggroed
### Extra Notes
_No response_
### AC rev. hash/commit
https://github.com/azerothcore/azerothcore-wotlk/commit/66bb2e9f20559bb2d311c6d7832f7095eee6fc74
### Operating system
Windows 10
### Custom changes or Modules
_No response_ | 1.0 | Creature_formations cannot handle overlapping formations - ### Current Behaviour
It's currently not possible to use creature_formations to link the same member to 2 different leaders, making some Blizzlike mechanics hard to implement.
If both leaderGUID and memberGUID are made primary keys, making the query is possible but one formation will work while the other will not.
### Expected Blizzlike Behaviour
In the image below, all of the packs in red are linked together. And, if the boss is pulled before all of the packs are dealt with, they will aggro as well.

### Source
Retail
### Steps to reproduce the problem
<details><summary>SQL Query</summary>
ALTER TABLE `creature_formations`
DROP PRIMARY KEY,
ADD PRIMARY KEY (`memberGUID`, `leaderGUID`) USING BTREE;
-- Static Formations
DELETE FROM `creature_formations` WHERE `leaderGUID` IN (83304,83306,83309,83311,83314,83316,83332,83335,83338,83340,83347,86396,91124,91134,91139,91151,91154,91162,91163,91164,91167,91174,91179,91182,91186,91188,91197,91204,91208,91212,91218,91223,91226,91231,91238,91242) AND `memberGUID` IN (83304,83305,83306,83307,83309,83310,83311,83312,83313,83314,83315,83316,83317,83318,83319,83320,83321,83332,83333,83335,83336,83338,83339,83340,83341,83342,83343,83347,83348,86396,91121,91122,91124,91125,91126,91134,91135,91136,91137,91138,91139,91140,91141,91142,91143,91151,91152,91153,91154,91155,91156,91162,91163,91164,91165,91166,91167,91168,91169,91170,91174,91175,91176,91179,91180,91182,91183,91184,91186,91187,91188,91189,91190,91194,91197,91198,91199,91200,91201,91202,91203,91204,91205,91206,91208,91209,91210,91212,91213,91214,91218,91219,91220,91221,91222,91223,91224,91225,91226,91227,91228,91229,91231,91232,91233,91238,91239,91242,91243,91244) AND `groupAI` IN (1, 3);
INSERT INTO `creature_formations` (`leaderGUID`, `memberGUID`, `dist`, `angle`, `groupAI`) VALUES
(83304, 83304, 0, 0, 3),
(83304, 83305, 0, 0, 3),
(83306, 83306, 0, 0, 3),
(83306, 83307, 0, 0, 3),
(83311, 83311, 0, 0, 3),
(83311, 83312, 0, 0, 3),
(83311, 83313, 0, 0, 3),
(83314, 83314, 0, 0, 3),
(83314, 83319, 0, 0, 3),
(83314, 83321, 0, 0, 3),
(83316, 83316, 0, 0, 3),
(83316, 83317, 0, 0, 3),
(83316, 83318, 0, 0, 3),
(83309, 83309, 0, 0, 3),
(83309, 83310, 0, 0, 3),
(91163, 91163, 0, 0, 1),
(91163, 83310, 0, 0, 1),
(91163, 83318, 0, 0, 1),
(91163, 83317, 0, 0, 1),
(91163, 83316, 0, 0, 1),
(91163, 83312, 0, 0, 1),
(91163, 83313, 0, 0, 1),
(91163, 83311, 0, 0, 1),
(91163, 83320, 0, 0, 1),
(91163, 83315, 0, 0, 1),
(91163, 83314, 0, 0, 1),
(91163, 83321, 0, 0, 1),
(91163, 83319, 0, 0, 1),
(91238, 91238, 0, 0, 3),
(91238, 91239, 0, 0, 3),
(91226, 91226, 0, 0, 3),
(91226, 91227, 0, 0, 3),
(91226, 91228, 0, 0, 3),
(91226, 91229, 0, 0, 3),
(91242, 91242, 0, 0, 3),
(91242, 91243, 0, 0, 3),
(91242, 91244, 0, 0, 3),
(91154, 91154, 0, 0, 3),
(91154, 91155, 0, 0, 3),
(91154, 91156, 0, 0, 3),
(91218, 91218, 0, 0, 3),
(91218, 91219, 0, 0, 3),
(91218, 91220, 0, 0, 3),
(91218, 91221, 0, 0, 3),
(91218, 91222, 0, 0, 3),
(91223, 91223, 0, 0, 3),
(91223, 91224, 0, 0, 3),
(91223, 91225, 0, 0, 3),
(91212, 91212, 0, 0, 3),
(91212, 91213, 0, 0, 3),
(91212, 91214, 0, 0, 3),
(91151, 91151, 0, 0, 3),
(91151, 91152, 0, 0, 3),
(91151, 91153, 0, 0, 3),
(91208, 91208, 0, 0, 3),
(91208, 91209, 0, 0, 3),
(91208, 91210, 0, 0, 3),
(83340, 83340, 0, 0, 3),
(83340, 83341, 0, 0, 3),
(83340, 83342, 0, 0, 3),
(83340, 83343, 0, 0, 3),
(91124, 91124, 0, 0, 3),
(91124, 91125, 0, 0, 3),
(91124, 91126, 0, 0, 3),
(91188, 91188, 0, 0, 3),
(91188, 91189, 0, 0, 3),
(91188, 91190, 0, 0, 3),
(91186, 91186, 0, 0, 3),
(91186, 91187, 0, 0, 3),
(83347, 83347, 0, 0, 3),
(83347, 83348, 0, 0, 3),
(86396, 86396, 0, 0, 3),
(86396, 91121, 0, 0, 3),
(86396, 91122, 0, 0, 3),
(83338, 83338, 0, 0, 3),
(83338, 83339, 0, 0, 3),
(91231, 91231, 0, 0, 3),
(91231, 91232, 0, 0, 3),
(91231, 91233, 0, 0, 3),
(83335, 83335, 0, 0, 3),
(83335, 83336, 0, 0, 3),
(83332, 83332, 0, 0, 3),
(83332, 83333, 0, 0, 3),
(91164, 91164, 0, 0, 3),
(91164, 91165, 0, 0, 3),
(91164, 91166, 0, 0, 3),
(91179, 91179, 0, 0, 3),
(91179, 91180, 0, 0, 3),
(91182, 91182, 0, 0, 3),
(91182, 91183, 0, 0, 3),
(91182, 91184, 0, 0, 3),
(91174, 91174, 0, 0, 3),
(91174, 91175, 0, 0, 3),
(91174, 91176, 0, 0, 3),
(91167, 91167, 0, 0, 3),
(91167, 91168, 0, 0, 3),
(91167, 91169, 0, 0, 3),
(91167, 91170, 0, 0, 3),
(91204, 91204, 0, 0, 3),
(91204, 91205, 0, 0, 3),
(91204, 91206, 0, 0, 3),
(91197, 91197, 0, 0, 3),
(91197, 91198, 0, 0, 3),
(91197, 91199, 0, 0, 3),
(91197, 91200, 0, 0, 3),
(91134, 91134, 0, 0, 3),
(91134, 91135, 0, 0, 3),
(91134, 91136, 0, 0, 3),
(91134, 91137, 0, 0, 3),
(91134, 91138, 0, 0, 3),
(91139, 91139, 0, 0, 3),
(91139, 91140, 0, 0, 3),
(91139, 91141, 0, 0, 3),
(91139, 91142, 0, 0, 3),
(91139, 91143, 0, 0, 3),
(91162, 91162, 0, 0, 1),
(91162, 91204, 0, 0, 1),
(91162, 91205, 0, 0, 1),
(91162, 91206, 0, 0, 1),
(91162, 91197, 0, 0, 1),
(91162, 91198, 0, 0, 1),
(91162, 91199, 0, 0, 1),
(91162, 91200, 0, 0, 1),
(91162, 91134, 0, 0, 1),
(91162, 91135, 0, 0, 1),
(91162, 91136, 0, 0, 1),
(91162, 91137, 0, 0, 1),
(91162, 91138, 0, 0, 1),
(91162, 91139, 0, 0, 1),
(91162, 91140, 0, 0, 1),
(91162, 91141, 0, 0, 1),
(91162, 91142, 0, 0, 1),
(91162, 91143, 0, 0, 1),
(91162, 91194, 0, 0, 1),
(91162, 91201, 0, 0, 1),
(91162, 91203, 0, 0, 1),
(91162, 91202, 0, 0, 1);
</details>
1. .tele manatomb
2. engage the packs near the entrance, see if they pull together
3. reset all encounters
4. engage the first boss, see if the packs near the boss also are aggroed
### Extra Notes
_No response_
### AC rev. hash/commit
https://github.com/azerothcore/azerothcore-wotlk/commit/66bb2e9f20559bb2d311c6d7832f7095eee6fc74
### Operating system
Windows 10
### Custom changes or Modules
_No response_ | priority | creature formations cannot handle overlapping formations current behaviour it s currently not possible to use creature formations to link the same member to different leaders making some blizzlike mechanics hard to implement if both leaderguid and memberguid are made primary keys making the query is possible but one formation will work while the other will not expected blizzlike behaviour in the image below all of the packs in red are linked together and if the boss is pulled before all of the packs are dealt with they will aggro as well source retail steps to reproduce the problem sql query alter table creature formations drop primary key add primary key memberguid leaderguid using btree static formations delete from creature formations where leaderguid in and memberguid in and groupai in insert into creature formations leaderguid memberguid dist angle groupai values tele manatomb engage the packs near the entrance see if they pull together reset all encounters engage the first boss see if the packs near the boss also are aggroed extra notes no response ac rev hash commit operating system windows custom changes or modules no response | 1 |
551,054 | 16,136,757,021 | IssuesEvent | 2021-04-29 12:50:39 | music-encoding/music-encoding | https://api.github.com/repos/music-encoding/music-encoding | closed | Support enclosed dynamics and hairpins | Component: Core Schema Priority: Medium Status: Needs Patch Type: Feature Request | To support enclosed dynamics and hairpins; that is, those surrounded by square brackets or a box, described by Gould on pp. 249, 597, and 599, make `<dynam>` and `<hairpin>` members of att.enclosingChars and extend data.ENCLOSURE to include the value "box". This change will also make it possible to enclose the current members of att.enclosingChars (accid, ambNote, artic, chord, cpMark, keyAccid, note, and rest) by a box. For the sake of completeness, the value "none" should also be added so that it's possible to say explicitly that the object is _not_ enclosed.
| 1.0 | Support enclosed dynamics and hairpins - To support enclosed dynamics and hairpins; that is, those surrounded by square brackets or a box, described by Gould on pp. 249, 597, and 599, make `<dynam>` and `<hairpin>` members of att.enclosingChars and extend data.ENCLOSURE to include the value "box". This change will also make it possible to enclose the current members of att.enclosingChars (accid, ambNote, artic, chord, cpMark, keyAccid, note, and rest) by a box. For the sake of completeness, the value "none" should also be added so that it's possible to say explicitly that the object is _not_ enclosed.
| priority | support enclosed dynamics and hairpins to support enclosed dynamics and hairpins that is those surrounded by square brackets or a box described by gould on pp and make and members of att enclosingchars and extend data enclosure to include the value box this change will also make it possible to enclose the current members of att enclosingchars accid ambnote artic chord cpmark keyaccid note and rest by a box for the sake of completeness the value none should also be added so that it s possible to say explicitly that the object is not enclosed | 1 |
388,716 | 11,491,522,414 | IssuesEvent | 2020-02-11 19:09:34 | TLTMedia/Marginalia | https://api.github.com/repos/TLTMedia/Marginalia | opened | UI & Backend for adding new course admin | Backend Frontend [MEDIUM] Priority | Course admins can add courses.
Make UI & Backend so that these course admins can add new course admins too. | 1.0 | UI & Backend for adding new course admin - Course admins can add courses.
Make UI & Backend so that these course admins can add new course admins too. | priority | ui backend for adding new course admin course admins can add courses make ui backend so that these course admins can add new course admins too | 1 |